\section{Invariance of the mutual information under spatial coupling}\label{sec:subadditivitystyle}
In this section we prove that the mutual information remains unchanged under spatial coupling in a suitable asymptotic limit
(Theorem \ref{LemmaGuerraSubadditivityStyle}).
We will compare the mutual informations of the following four variants of \eqref{eq:def_coupledSyst}. In each case, the signal $\bs$ has $n(L+1)$ i.i.d. components.
\begin{itemize}
\item \emph{The fully connected system:} We choose $w = L/2$ and a homogeneous coupling matrix with elements $\Lambda_{\mu,\nu} = (L+1)^{-1}$ in \eqref{eq:def_coupledSyst}. This yields a homogeneous fully connected system equivalent to \eqref{eq:mainProblem} with $n(L+1)$ instead of $n$ variables. The associated mutual information per variable, for fixed $L$ and $n$, is denoted by $i_{n, L}^{\rm con}$.
\item \emph{The SC pinned system:} This is the system studied in Section~\ref{sec:thresh-sat} to prove threshold saturation, with the pinning condition. In this case we choose $0<w < L/2$. The coupling matrix $\boldsymbol{\Lambda}$ is any matrix that fulfills the requirements in Section~\ref{subsec:SC} (the concrete example given there will do). The associated mutual information per variable is here denoted $i_{n, w, L}^{\rm cou}$. Note that $i_{n, w, L}^{\rm cou} = (n(L+1))^{-1} I_{w,L}(\tbf{S}; \bW)$.
\item \emph{The periodic SC system:} This is the same SC system (with same coupling window and coupling matrix) but without the pinning condition. The associated mutual information per variable at fixed $L,w, n$ is denoted $i_{n, w, L}^{\rm per}$.
\item \emph{The decoupled system:} This corresponds simply to $L+1$ identical and independent systems of the form \eqref{eq:mainProblem} with $n$ variables each. This is equivalent to the periodic SC system with $w = 0$. The associated mutual information per variable is denoted $i_{n, L}^{\rm dec}$. Note that $i_{n, L}^{\rm dec} = n^{-1} I(\tbf{S}; \bW)$.
\end{itemize}
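For concreteness, the two extreme coupling matrices above can be written down directly. Below is a small numerical sketch; the banded circulant matrix is only an illustrative choice (it satisfies symmetry and row-stochasticity, but the full set of requirements of Section~\ref{subsec:SC}, e.g. a non-negative Fourier transform, may call for a different window profile):

```python
import numpy as np

def homogeneous_coupling(L):
    # fully connected system: Lambda_{mu,nu} = 1/(L+1) for all mu, nu
    return np.full((L + 1, L + 1), 1.0 / (L + 1))

def banded_circulant_coupling(L, w):
    # periodic SC system: uniform weights on a window of width w around the ring
    Lam = np.zeros((L + 1, L + 1))
    for mu in range(L + 1):
        for k in range(-w, w + 1):
            Lam[mu, (mu + k) % (L + 1)] = 1.0 / (2 * w + 1)
    return Lam

for Lam in (homogeneous_coupling(9), banded_circulant_coupling(9, 2)):
    assert np.allclose(Lam, Lam.T)            # symmetric
    assert np.allclose(Lam.sum(axis=1), 1.0)  # row-stochastic

# w = 0 recovers the decoupled system: identity coupling
assert np.allclose(banded_circulant_coupling(9, 0), np.eye(10))
```

The $w=0$ check makes explicit the statement above that the decoupled system is the periodic SC system with a trivial window.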
Let us outline the proof strategy.
In the first step, we use an interpolation method twice: first interpolating between the fully connected and periodic SC systems, and then between the decoupled and periodic SC systems. This allows us to sandwich the mutual information of the periodic SC system between those of the fully connected and decoupled systems (see Lemma~\ref{lemma:freeEnergy_sandwich}).
In the second step, using a similar interpolation together with Fekete's theorem for superadditive sequences, we prove that the decoupled and fully connected systems have asymptotically the same mutual information (see Lemma \ref{lemma:superadditivity} for the existence of the limit). From these results we deduce the proposition:
\begin{proposition}\label{lemma:freeEnergy_sandwich_limit}
For any $0\leq w \leq L/2$
\begin{align}
\lim_{n\to +\infty} i_{n, w, L}^{\rm per}= \lim_{n\to +\infty}\frac{1}{n}I(\tbf{S}; \bW)
\end{align}
\end{proposition}
\begin{proof}
Lemma \ref{lemma:superadditivity} implies that $\lim_{n\to +\infty} i_{n, L}^{\rm con} = \lim_{n\to +\infty} i_{n, L}^{\rm dec}$. One also notes that $i_{n, L}^{\rm dec} = \frac{1}{n} I(\tbf{S}; \bW)$. Thus the result follows from Lemma \ref{lemma:freeEnergy_sandwich}.
\end{proof}
In a third step, an easy argument shows the following.
\begin{proposition}\label{lemma:openVSclosed}
Assume $P_0$ has finite first four moments. For any $0\leq w \leq L/2$
\begin{align}
\vert i_{n, w, L}^{\rm per} - i_{n, w, L}^{\rm cou} \vert = \mathcal{O}\Big(\frac{w}{L}\Big)
\end{align}
\end{proposition}
\begin{proof}
See Appendix \ref{appendix-pinfree}.
\end{proof}
Since $i_{n, w, L}^{\rm cou} = (n(L+1))^{-1} I_{w,L}(\tbf{S}; \bW)$, Theorem \ref{LemmaGuerraSubadditivityStyle} is an immediate consequence of Propositions \ref{lemma:freeEnergy_sandwich_limit} and \ref{lemma:openVSclosed}.
\subsection{A generic interpolation}\label{generic}
Let us consider two systems of the same total size $n(L+1)$ with coupling matrices $\bold{\Lambda}^{(1)}$ and $\bold{\Lambda}^{(0)}$ supported on coupling windows $w_1$ and $w_0$ respectively.
Moreover, we assume that the observations associated with the first system are corrupted by AWGN equal to $\sqrt{\Delta / t}\bz$, while the
AWGN corrupting the second system is $\sqrt{\Delta / (1 - t)}\bz^{\prime}$, where the $Z_{ij}$ and $Z^{\prime}_{ij}$ are i.i.d.
standard Gaussians and $t\in[0,1]$ is the \emph{interpolation parameter}. The interpolating inference problem has the form
\begin{align}
\begin{cases}
w_{i_\mu j_\nu} & = s_{i_\mu} s_{j_\nu}\sqrt{\frac{\Lambda_{\mu\nu}^{(1)}}{n}} + z_{i_\mu j_\nu}\sqrt{\frac{\Delta}{t}},\\
w_{i_\mu j_\nu} & = s_{i_\mu} s_{j_\nu}\sqrt{\frac{\Lambda_{\mu\nu}^{(0)}}{n}} + z_{i_\mu j_\nu}^\prime\sqrt{\frac{\Delta}{1-t}}
\end{cases}
\end{align}
In this setting, at $t=1$ the interpolated system corresponds to the first system, since the noise in the second set of observations is
infinitely large and carries no information, while at $t=0$ the opposite happens.
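As a sanity check of this two-channel construction, the interpolated observations for a single block pair $(\mu,\nu)$ can be simulated directly. This is only a sketch: the Rademacher prior and the scalar coupling weights below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, Delta, t = 200, 0.5, 0.3
Lam1, Lam0 = 0.2, 0.1                    # illustrative coupling weights
s_mu = rng.choice([-1.0, 1.0], size=n)   # signal blocks with a Rademacher prior
s_nu = rng.choice([-1.0, 1.0], size=n)

outer = np.outer(s_mu, s_nu)
# first channel: noise variance Delta/t; second channel: Delta/(1-t)
w1 = outer * np.sqrt(Lam1 / n) + rng.standard_normal((n, n)) * np.sqrt(Delta / t)
w0 = outer * np.sqrt(Lam0 / n) + rng.standard_normal((n, n)) * np.sqrt(Delta / (1 - t))
assert w1.shape == w0.shape == (n, n)
```

As $t\to 1$ the noise level $\sqrt{\Delta/(1-t)}$ of the second channel blows up, so the second channel carries no information and the interpolated system reduces to the first one, in line with the discussion above.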
The associated interpolating posterior distribution can be expressed
as
\begin{align}\label{post-interp-d}
P_{t}(\bx\vert \bs, \bz, \bz^\prime) = \frac{1}{\mathcal{Z}_{\rm int}(t)} e^{-\mathcal{H}_{\rm int}(t, \bold{\Lambda}^{(1)},\bold{\Lambda}^{(0)})}\prod_{\mu=0}^{L}\prod_{i_\mu=1}^n P_0(x_{i_\mu})
\end{align}
where the ``Hamiltonian'' is
$\mathcal{H}_{\rm int}(t,\bold{\Lambda}^{(1)},\bold{\Lambda}^{(0)}) \defeq
\mathcal{H}(t,\bold{\Lambda}^{(1)}) + \mathcal{H}(1-t,\bold{\Lambda}^{(0)})$ with\footnote{Note that since the SC system is defined on a ring, we can express the Hamiltonian in terms of forward coupling only.}
\begin{align}
\mathcal{H}(t,\bold{\Lambda}) \defeq &\frac{t}{\Delta} \sum_{\mu=0}^L \Lambda_{\mu,\mu} \sum_{i_\mu\le j_\mu}\bigg( \frac{x_{i_\mu}^2x_{j_\mu}^2}{2n} - \frac{s_{i_\mu}s_{j_\mu}x_{i_\mu}x_{j_\mu}}{n} - \frac{x_{i_\mu}x_{j_\mu}z_{i_\mu j_\mu} \sqrt{\Delta}}{\sqrt{n t\Lambda_{\mu,\mu}}}\bigg)\nonumber\\
+&\frac{t}{\Delta} \sum_{\mu=0}^L\sum_{\nu=\mu+1}^{\mu+w}\Lambda_{\mu,\nu}\sum_{i_\mu,j_\nu=1}^{n}\bigg( \frac{x_{i_\mu}^2x_{j_\nu}^2}{2n} - \frac{s_{i_\mu}s_{j_\nu}x_{i_\mu}x_{j_\nu}}{n} - \frac{x_{i_\mu}x_{j_\nu}z_{i_\mu j_\nu} \sqrt{\Delta}}{\sqrt{nt\Lambda_{\mu,\nu}}}\bigg),
\end{align}
and $\mathcal{Z}_{\rm int}(t)$ is the obvious normalizing factor, the ``partition function''. The posterior average with respect to \eqref{post-interp-d} is denoted by the bracket notation $\langle - \rangle_t$. It is easy to see
that the mutual information per variable (for the interpolating inference problem) can be expressed as
\begin{align}
i_{\rm int}(t) \defeq - \frac{1}{n(L+1)}\mathbb{E}_{{\bf S}, {\bf Z}, {\bf Z}^{\prime}}[\ln \mathcal{Z}_{\rm int}(t)] + \frac{v^2}{4\Delta} + \frac{1}{4 \Delta n(L+1)} (2\mathbb{E}[S^4] - v^2)
\end{align}
The aim of the interpolation method in the present context is to compare the mutual informations of the systems at $t=1$ and $t=0$. To do so,
one uses the fundamental theorem of calculus
\begin{align}\label{calc}
i_{\rm int}(1) - i_{\rm int}(0) = \int_{0}^1 dt \, \frac{di_{\rm int}(t)}{dt}
\end{align}
and tries to determine the sign of the integral term.
We first prove that
\begin{align}
& \frac{di_{\rm int}(t)}{dt} = \nonumber\\
&\frac{1}{4\Delta(L+1)} \mathbb{E}_{\bf S, \bf Z,{\bf Z}^{\prime}}\Big[\Big\la
-\frac{1}{n^2}\Big(\sum_{\mu=0}^L\sum_{\nu=\mu-w_1}^{\mu + w_1} \Lambda_{\mu\nu}^{(1)}
\sum_{i_\mu, j_\nu=1}^n X_{i_\mu}X_{j_\nu}S_{i_\mu}S_{j_\nu} + \sum_{\mu=0}^L \Lambda_{\mu\mu}^{(1)} \sum_{i_\mu=1}^n X_{i_\mu}^2 S_{i_\mu}^2\Big) \nonumber \\
&+ \frac{1}{n^2}\Big(\sum_{\mu=0}^L\sum_{\nu = \mu-w_0}^{\mu+w_0} \Lambda_{\mu\nu}^{(0)} \sum_{i_\mu, j_\nu=1}^n X_{i_\mu}X_{j_\nu}S_{i_\mu}S_{j_\nu}
+ \sum_{\mu=0}^L \Lambda_{\mu\mu}^{(0)} \sum_{i_\mu=1}^n X_{i_\mu}^2 S_{i_\mu}^2 \Big) \Big\ra_{t} \Big],
\label{eq:derivative_interpolation}
\end{align}
where $\la - \ra_t$ denotes the expectation over the posterior distribution associated with the interpolated Hamiltonian $\mathcal{H}_{\rm int}(t,\bold{\Lambda}^{(1)},\bold{\Lambda}^{(0)})$.
We start with a simple differentiation of the Hamiltonian w.r.t. $t$ which yields
\begin{align*}
\frac{d}{dt}\mathcal{H}_{\rm int}(t,\bold{\Lambda}^{(1)},\bold{\Lambda}^{(0)}) = \frac{1}{\Delta} \big( \mathcal{A}(t,\bold{\Lambda^{(1)}}) - \mathcal{B}(t,\bold{\Lambda}^{(0)}) \big),
\end{align*}
where
\begin{align*}
\mathcal{A}(t,\bold{\Lambda}^{(1)}) = & \sum_{\mu=0}^L \Lambda_{\mu\mu}^{(1)} \sum_{i_\mu\le j_\mu}\bigg( \frac{x_{i_\mu}^2x_{j_\mu}^2}{2n}
- \frac{s_{i_\mu}s_{j_\mu}x_{i_\mu}x_{j_\mu}}{n} - \frac{x_{i_\mu}x_{j_\mu}z_{i_\mu j_\mu} \sqrt{\Delta}}{2\sqrt{n t\Lambda_{\mu\mu}^{(1)}}}\bigg)\nonumber\\
+& \sum_{\mu=0}^L\sum_{\nu=\mu+1}^{\mu+w_1}\Lambda_{\mu\nu}^{(1)}\sum_{i_\mu,j_\nu=1}^{n}\bigg( \frac{x_{i_\mu}^2x_{j_\nu}^2}{2n}
- \frac{s_{i_\mu}s_{j_\nu}x_{i_\mu}x_{j_\nu}}{n} - \frac{x_{i_\mu}x_{j_\nu}z_{i_\mu j_\nu} \sqrt{\Delta}}{2\sqrt{nt\Lambda_{\mu\nu}^{(1)}}}\bigg)\nonumber \\
\mathcal{B}(t,\bold{\Lambda}^{(0)}) = &\sum_{\mu=0}^L \Lambda^{(0)}_{\mu\mu} \sum_{i_\mu\le j_\mu}\bigg( \frac{x_{i_\mu}^2x_{j_\mu}^2}{2n} - \frac{s_{i_\mu}s_{j_\mu}x_{i_\mu}x_{j_\mu}}{n} - \frac{x_{i_\mu}x_{j_\mu}z^{\prime}_{i_\mu j_\mu} \sqrt{\Delta}}{2\sqrt{n (1-t)\Lambda^{(0)}_{\mu\mu}}}\bigg)\nonumber\\
+& \sum_{\mu=0}^L\sum_{\nu=\mu+1}^{\mu+w_0}\!\Lambda^{(0)}_{\mu\nu}\!\sum_{i_\mu,j_\nu=1}^{n}\!\bigg( \frac{x_{i_\mu}^2x_{j_\nu}^2}{2n} - \frac{s_{i_\mu}s_{j_\nu}x_{i_\mu}x_{j_\nu}}{n} - \frac{x_{i_\mu}x_{j_\nu}z^{\prime}_{i_\mu j_\nu} \sqrt{\Delta}}{2\sqrt{n(1-t)\Lambda^{(0)}_{\mu\nu}}}\bigg).
\end{align*}
Using integration by parts with respect to the Gaussian variables $Z_{ij}$, $Z_{ij}^\prime$, one gets
\begin{align}
&\mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}[Z_{i_\mu j_\nu} \langle X_{i_\mu} X_{j_\nu} \rangle_{t}] = \sqrt{\frac{t\Lambda^{(1)}_{\mu,\nu}}{n\Delta}} \mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}\Big[ \langle X_{i_\mu}^2 X_{j_\nu}^2 \rangle_{t} - \langle X_{i_\mu} X_{j_\nu} \rangle_{t}^2 \Big]
\label{intfirst}
\\
&\mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}[Z^{\prime}_{i_\mu j_\nu} \langle X_{i_\mu} X_{j_\nu} \rangle_{t}] = \sqrt{\frac{(1-t)\Lambda^{(0)}_{\mu,\nu}}{n\Delta}} \mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}\Big[ \langle X_{i_\mu}^2 X_{j_\nu}^2 \rangle_{t} - \langle X_{i_\mu} X_{j_\nu} \rangle_{t}^2 \Big].
\label{eq:derivative_freeEnergy_integrationByPart}
\end{align}
Moreover an application of the Nishimori identity \eqref{eq:nishCond} shows
\begin{align}\label{eq:derivative_freeEnergy_nishimori}
\mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}[ \langle X_{i_\mu} X_{j_\nu} \rangle_{t}^2 ] = \mathbb{E}_{\bf S, \bf Z, \bf Z^{\prime}}[ \langle X_{i_\mu} X_{j_\nu} S_{i_\mu} S_{j_\nu} \rangle_{t} ].
\end{align}
Combining \eqref{intfirst}--\eqref{eq:derivative_freeEnergy_nishimori} with the derivative of the Hamiltonian computed above, and using the fact that the SC system defined on a ring satisfies
\begin{align*}
&\sum_{\mu=0}^L \Lambda_{\mu\mu} \sum_{i_\mu\le j_\mu} x_{i_\mu} x_{j_\mu}s_{i_\mu} s_{j_\mu} + \sum_{\mu=0}^L\sum_{\nu=\mu+1}^{\mu+w}\Lambda_{\mu\nu}\sum_{i_\mu,j_\nu=1}^{n} x_{i_\mu} x_{j_\nu} s_{i_\mu} s_{j_\nu} = \\
&\frac{1}{2} \sum_{\mu=0}^L\sum_{\nu=\mu-w}^{\mu+w}\Lambda_{\mu\nu}\sum_{i_\mu,j_\nu=1}^{n} x_{i_\mu} x_{j_\nu} s_{i_\mu} s_{j_\nu} + \frac{1}{2} \sum_{\mu=0}^L \Lambda_{\mu\mu} \sum_{i_\mu=1}^n x_{i_\mu}^2 s_{i_\mu}^2,
\end{align*}
we obtain \eqref{eq:derivative_interpolation}.
Now, define the {\it overlaps} associated with each block $\mu$ as
\begin{align}
q_\mu \defeq \frac{1}{n} \sum_{i_\mu=1}^n X_{i_\mu}S_{i_\mu}, \qquad \tilde{q}_\mu \defeq \frac{1}{n} \sum_{i_\mu=1}^n X_{i_\mu}^2 S_{i_\mu}^2.
\end{align}
Hence, (\ref{eq:derivative_interpolation}) can be rewritten as
\begin{align} \label{eq:derivative_interpolation_overlap}
\frac{di_{\rm int}(t)}{dt} = \frac{1}{4\Delta(L+1)} \mathbb{E}_{{\bf S}, {\bf Z},{\bf Z}^{\prime}} \Big[\Big\la\mathbf{q}^{\intercal} \mathbf{\Lambda}^{(0)} \, \mathbf{q} -\mathbf{q}^{\intercal}
\mathbf{\Lambda}^{(1)} \, \mathbf{q} + \frac{1}{n} \, \Big(\tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}}^{(0)}) - \tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}^{(1)}}) \Big) \Big\ra_t \Big],
\end{align}
where $\mathbf{q}^\intercal = [q_0\cdots q_L]$, $\tilde{\mathbf{q}}^\intercal = [\tilde{q}_0\cdots \tilde{q}_L]$ are row vectors and
${\rm diag} ({\mathbf{\Lambda}})$ represents the column vector with entries $\{\Lambda_{\mu\mu}\}_{\mu=0}^L$.
The coupling matrices $\mathbf{\Lambda}^{(1)}, \mathbf{\Lambda}^{(0)}$ are real, symmetric, circulant (due to the periodicity of the ring) and thus can be diagonalized in the same Fourier basis.
We have
\begin{align} \label{eq:derivative_generic_Fourier}
\frac{di_{\rm int}(t)}{dt} = \frac{1}{4\Delta (L+1)} \mathbb{E}_{{\bf S}, {\bf Z},{\bf Z}^{\prime}} \Big[ \Big\la \hat{\mathbf{q}}^{\intercal}
\big(\mathbf{D}^{(0)} - \mathbf{D}^{(1)}\big) \, \hat{\mathbf{q}} + \frac{1}{n} \, \Big(\tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}}^{(0)})
- \tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}^{(1)}}) \Big)\Big\ra_t \Big],
\end{align}
where $\widehat{\mathbf{q}}$ is the discrete Fourier transform of $\mathbf{q}$ and $\mathbf{D}^{(1)}, \mathbf{D}^{(0)}$ are
the diagonal matrices with the eigenvalues of $\mathbf{\Lambda}^{(1)},\mathbf{\Lambda}^{(0)}$. Since the coupling matrices are stochastic with non-negative Fourier transform, their
largest eigenvalue equals $1$ (and is associated with the $0$-th Fourier mode) while the remaining eigenvalues are non-negative. These properties will be essential in the following paragraphs.
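These spectral facts are easy to check numerically. The sketch below uses a triangular (Fejér) window, one concrete profile whose Fourier transform is non-negative; the paper's actual example may differ. Eigenvalues of a circulant matrix are the DFT of its first row, the $0$-th mode gives the top eigenvalue $1$, and all modes are non-negative.

```python
import numpy as np

L, w = 10, 2                       # ring of L+1 = 11 blocks, window width w
row = np.zeros(L + 1)
for k in range(-w, w + 1):         # triangular (Fejer) weights
    row[k % (L + 1)] = (w + 1 - abs(k)) / (w + 1) ** 2
assert np.isclose(row.sum(), 1.0)              # stochastic

# eigenvalues of the circulant coupling matrix = DFT of its first row
eig = np.fft.fft(row).real
assert np.isclose(eig[0], 1.0)                 # 0-th Fourier mode ...
assert np.isclose(eig.max(), 1.0)              # ... is the largest eigenvalue
assert (eig >= -1e-12).all()                   # remaining modes non-negative
```

The non-negativity here is automatic because the triangular window is the autocorrelation of a rectangular one, so its DFT is a squared magnitude; a flat window would not have this property.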
\subsection{Applications}\label{appli}
Our first application is
\begin{lemma}\label{lemma:freeEnergy_sandwich}
Let the coupling matrix $\boldsymbol{\Lambda}$ satisfy the requirements (i)-(v) in Sec. \ref{subsec:SC}.
The mutual informations of the decoupled, periodic SC and fully connected systems satisfy
\begin{align}
i_{n, L}^{\rm{dec}} \le i_{n, w, L}^{\rm per} \le i_{n, L}^{\rm con}.
\end{align}
\end{lemma}
\begin{proof}
We start with the second inequality. We choose $\Lambda_{\mu\nu}^{(1)} = (L+1)^{-1}$ for the fully connected system at $t=1$. This matrix
has a single eigenvalue equal to $1$ and an $L$-fold degenerate eigenvalue equal to $0$. Therefore it is clear
that $\mathbf{D}^{(0)} - \mathbf{D}^{(1)}$ is positive semi-definite and
$\hat{\mathbf{q}}^{\intercal} \big(\mathbf{D}^{(0)} - \mathbf{D}^{(1)}\big)\hat{\mathbf{q}}\geq 0$.
Moreover notice that $\Lambda_{\mu\mu}^{(0)} = \Lambda_{00}$ is independent of $L$. Therefore for $L$ large enough
\begin{align}
\tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}}^{(0)}) - \tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}^{(1)}}) =
\Big(\Lambda_{00}-\frac{1}{L+1}\Big) \sum_{\mu=0}^L\tilde{q}_{\mu} \geq 0.
\end{align}
Therefore we conclude that \eqref{eq:derivative_generic_Fourier} is non-negative, and from \eqref{calc} $i_{n,L}^{\rm con} - i_{n, w, L}^{\rm per} \geq 0$.
For the first inequality we proceed similarly, but this time we choose $\Lambda_{\mu\nu}^{(1)} = \delta_{\mu\nu}$ for the decoupled system which has all eigenvalues equal to $1$. Therefore
$\mathbf{D}^{(0)} - \mathbf{D}^{(1)}$ is negative semidefinite so $\hat{\mathbf{q}}^{\intercal} \big(\mathbf{D}^{(0)} - \mathbf{D}^{(1)}\big)\hat{\mathbf{q}}\leq 0$. Moreover this time
\begin{align}
\tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}}^{(0)}) - \tilde{\mathbf{q}}^{\intercal} {\rm diag} ({\mathbf{\Lambda}^{(1)}}) =
\Big(\Lambda_{00}-1\Big) \sum_{\mu=0}^L\tilde{q}_{\mu} \leq 0
\end{align}
because we necessarily have $0\leq \Lambda_{00} \leq 1$. We conclude that \eqref{eq:derivative_generic_Fourier} is
non-positive and from \eqref{calc} $i_{n, L}^{\rm{dec}} - i_{n, w, L}^{\rm per}\leq 0$.
\end{proof}
The second application is
\begin{lemma}\label{lemma:superadditivity}
Consider the mutual information of system \eqref{eq:mainProblem} and set $i_n = n^{-1}I(\tbf{S}; \bW)$. Consider also the mutual informations per variable $i_{n_1}$ and $i_{n_2}$
of two systems of sizes $n_1$ and $n_2$ with $n = n_1 + n_2$.
The sequence $n i_n$ is superadditive in the sense that
\begin{align}
n_1 i_{n_1} + n_2 i_{n_2} \leq n i_n.
\end{align}
Fekete's lemma then implies that $\lim_{n\to +\infty} i_n$ exists.
\end{lemma}
\begin{proof}
The proof is easily obtained by following the generic interpolation method of Sec. \ref{generic} for a coupled system with two spatial positions (i.e. $L+1=2$). We choose
$\Lambda_{\mu\nu}^{(0)}=\delta_{\mu\nu}$, $\mu,\nu\in\{0,1\}$, for the ``decoupled'' system and $\Lambda_{\mu\nu}^{(1)}=1/2$ for $\mu,\nu\in\{0,1\}$ for the ``fully connected'' system.
This analysis is essentially identical to \cite{guerraToninelli}, where the existence of the thermodynamic limit of the free energy for the Sherrington--Kirkpatrick mean-field spin glass is proven.
\end{proof}
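Fekete's lemma itself is easy to illustrate numerically. The sketch below uses a toy superadditive sequence (not the mutual information sequence of the paper, whose analysis is the content of this section): for a superadditive $a_n$, the ratio $a_n/n$ converges to $\sup_n a_n/n$.

```python
# toy superadditive sequence: a_{n1+n2} >= a_{n1} + a_{n2}
def a(n):
    return n - 1

# check superadditivity on a grid
for n1 in range(1, 30):
    for n2 in range(1, 30):
        assert a(n1 + n2) >= a(n1) + a(n2)

# Fekete: a_n / n converges to sup_n a_n / n (here, 1)
vals = [a(n) / n for n in range(1, 2000)]
assert max(vals) <= 1.0
assert abs(vals[-1] - 1.0) < 1e-3
```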
TITLE: Diverging improper integral
QUESTION [6 upvotes]: When asked to evaluate $\int_{a}^{\infty}f(x)dx$, you split the interval based on the improper points.
If there is another improper point other than $\infty$, at $b$, we will write: $\int_{a}^{\infty}f(x)dx=\int_{a}^{b}f(x)dx+\int_{b}^{c}f(x)dx+\int_{c}^{\infty}f(x)dx$ and ask whether each of the integrals on the right hand side converge. If they all converge, so does the original one.
But if at least one of them diverges, the original one doesn't. What is the justification for this conclusion?
I can see that if $\int_{b}^{c}f(x)dx$ diverges, we can assume that the original integral converges and move the integrals around like this: $\int_{a}^{\infty}f(x)dx-\int_{a}^{b}f(x)dx-\int_{c}^{\infty}f(x)dx=\int_{b}^{c}f(x)dx$ then we'll get a contradiction. But what if both $\int_{b}^{c}f(x)dx$ and $\int_{a}^{b}f(x)dx$ diverge? What is the argument then?
REPLY [9 votes]: This is a good question. One way to think about the issue is that the "convergence" of your integral is not just about whether the value is finite, but also about whether the finite value is well defined. This is somewhat related to indeterminate forms where $+\infty + (-\infty)$ is not a well-defined quantity. (You can't say they cancel out, since, heuristically, $\infty - \infty = (1+\infty)-\infty = 1+\infty -\infty = 1 + (\infty - \infty)$...)
So as long as one of the integrals on the right hand side diverges, the entire algebraic expression becomes indeterminate, and hence we say the integral diverges. (Diverges doesn't necessarily mean that the value must run off to infinity; it can just mean that the value does not converge to a definite number/expression.)
(A similar issue also crops up when summing infinite series that don't converge absolutely. The Riemann rearrangement theorem tells you that, depending on "how" you sum the series, you can get the final number to be anything you want.)
Sometimes, however, it is advantageous to try to make sense of an integral which can be split into two divergent improper integrals, but also where one can argue that there should be some natural cancellation. For example, one may want to argue that $\int_{-a}^a \frac{1}{x^3} dx$ evaluates to 0 since it is the integral of an odd function. For this kind of situation, the notion of Cauchy principal value is useful. But notice that the definition is taken in the sense of a limit that relies on some cancellation, and so, much in the same way as with the Riemann rearrangement theorem, "how" you take the limit can affect what value you get as the end result. (This is compatible with the notion that the integral diverges; as I said above, divergence should be taken to mean the lack of a well-defined, unique convergence.)
Edit. Let me add another example of an integral that remains finite but does not converge. What is the value of $\int_{0}^\infty \sin(x) dx$? For every fixed $a > 0$, $\int_0^a\sin(x) dx = 1 - \cos(a) $ is a number between 0 and 2. But the limit as $a\to \infty$ doesn't exist! If you pick a certain way to approach $\infty$, say choose a sequence $a_n = 2\pi n$, then you'll come to the conclusion that the "limit" is 0; but if you choose $a_n = (2n + 1)\pi$, then you get the conclusion that the limit is $2$. The idea here is roughly similar: you take a left limit and a right limit approaching the improper point, and depending on how you choose your representative points (by an algebraic relation between the speed at which the left and right limits approach the improper point, say), you can get different answers.
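The oscillation in that last example is easy to check numerically; the two choices of subsequence really do give different "limits":

```python
import math

def F(a):
    # partial integral: F(a) = ∫_0^a sin(x) dx = 1 - cos(a)
    return 1 - math.cos(a)

along_even = [F(2 * math.pi * n) for n in range(1, 6)]       # a_n = 2πn
along_odd  = [F((2 * n + 1) * math.pi) for n in range(1, 6)]  # a_n = (2n+1)π
assert all(abs(v - 0) < 1e-9 for v in along_even)   # this subsequence gives 0
assert all(abs(v - 2) < 1e-9 for v in along_odd)    # this subsequence gives 2
```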
TITLE: If $n$ vectors are linearly independent, is there only one way to write a vector as a linear combination of those vectors?
QUESTION [2 upvotes]: I know the converse is true, because if you can write 0 in two ways you can keep adding 0 to get an infinite number of linear combinations that sum to the same thing.
(Sorry, it's been years since I've had linear algebra.)
REPLY [7 votes]: Technically it's at most one; for example, a collection consisting of a single non-zero vector is linearly independent, but if the vector space $V$ we're working in has dimension $>1$, there will be vectors that are not linear combinations of that vector (i.e. scalar multiples of it). If $n=\dim(V)$, then you are correct that it is actually exactly one.
If $v_1,\ldots,v_n$ are linearly independent, then by definition
$$\sum_{i=1}^nc_iv_i=0\implies c_i=0\text{ for all }i,$$
so if
$$w=\sum_{i=1}^nc_iv_i=\sum_{i=1}^nd_iv_i$$
then $$0=w-w=\sum_{i=1}^n(c_i-d_i)v_i,$$
hence $c_i-d_i=0$ for all $i$, hence $c_i=d_i$ for all $i$. Thus, if the $v_i$ are linearly independent, there is at most one way of writing a given vector as a linear combination of them.
Conversely, if there is at most one way of writing a given vector as a linear combination of the $v_i$, that is if
$$\sum_{i=1}^nc_iv_i=\sum_{i=1}^nd_iv_i\implies c_i=d_i\text{ for all }i,$$
then if
$$\sum_{i=1}^nc_iv_i=0,$$
we have
$$\sum_{i=1}^nc_iv_i=\sum_{i=1}^n0v_i\implies c_i=0\text{ for all }i,$$
so the $v_i$ are linearly independent.
REPLY [5 votes]: Yes. Any vector that can be expressed as a linear combination of linearly independent vectors can be expressed in one and only one way.
To see this, suppose that $\mathbf{v}$ can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_n$, with scalars $\alpha_i$ and $\beta_i$:
$$ \mathbf{v} = \alpha_1\mathbf{v}_1+\cdots + \alpha_n\mathbf{v}_n = \beta_1\mathbf{v}_1+\cdots + \beta_n\mathbf{v}_n.$$
Then:
$$\begin{align*}
\mathbf{0} &= \mathbf{v}-\mathbf{v} \\
&=\bigl( \alpha_1\mathbf{v}_1+\cdots + \alpha_n\mathbf{v}_n\bigr)-\bigl( \beta_1\mathbf{v}_1+\cdots + \beta_n\mathbf{v}_n\bigr)\\
&= (\alpha_1-\beta_1)\mathbf{v}_1 + \cdots + (\alpha_n-\beta_n)\mathbf{v}_n.
\end{align*}$$
Since $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent, this means $\alpha_1-\beta_1=\alpha_2-\beta_2=\cdots=\alpha_n-\beta_n = 0$, so $\alpha_1=\beta_1,\ldots,\alpha_n=\beta_n$. That is: the two expressions are actually identical.
One way to remember this is:
A set $S$ of vectors of $V$ spans $V$ if and only if every vector of $V$ can be written as a linear combination of vectors in $S$ in at least one way. A set $I$ of vectors of $V$ is linearly independent if and only if every vector of $V$ can be written as a linear combination of vectors in $I$ in at most one way. A set $B$ of vectors of $V$ is a basis if and only if every vector of $V$ can be written as a linear combination of vectors in $B$ in exactly one way.
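Numerically, this uniqueness shows up as a full-rank linear system: with linearly independent columns the coefficients are determined. A quick sketch (the vectors are arbitrary illustrative choices):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])      # v1, v2 are linearly independent
w  = 3 * v1 - 2 * v2                # a vector in their span

# solve for coefficients; uniqueness corresponds to full column rank
A = np.column_stack([v1, v2])
coef, res, rank, _ = np.linalg.lstsq(A, w, rcond=None)
assert rank == 2                    # independence: the solution is unique
assert np.allclose(coef, [3, -2])   # and it recovers the original coefficients
```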
TITLE: Permutations and Combinations.
QUESTION [0 upvotes]: There are 4 red and 6 blue marbles in a bag, 3 are picked out at random, what is the probability that at least one of them is red? My answer is 1/2
First I calculated the total number of ways that I can pick out three marbles at random: 10C3 = 120.
Then I multiplied: 4C1 * 6C2 = 60, I simplified 60/120 and got 1/2, can someone please confirm if I am correct or not, thank you in advance, I got this question off of an exam paper and I don't have an answer sheet for it, thank you.
REPLY [1 votes]: Your answer would have been correct if the question was 'find the probability that EXACTLY one of them is red'. But it says (if you wrote it correctly) that you must determine the probability that AT LEAST one is red. So this can be done by inverting the probability that NONE of them are red:
$$
P(\mbox{none are red})=\frac{6C3}{10C3}=\frac{20}{120}=\frac{1}{6}
$$
so then you get
$$
P(\mbox{at least one is red})=1-\frac{1}{6}=\frac{5}{6}
$$
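The two counts can be compared directly with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

total = comb(10, 3)                 # 120 ways to pick 3 of 10 marbles
none_red = comb(6, 3)               # 20 all-blue picks
p_at_least_one = 1 - Fraction(none_red, total)
assert p_at_least_one == Fraction(5, 6)

# the asker's 4C1 * 6C2 counts only the "exactly one red" outcomes:
assert Fraction(comb(4, 1) * comb(6, 2), total) == Fraction(1, 2)
```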
TITLE: Difficulty with a tricky matrix exponential step
QUESTION [2 upvotes]: I am having great difficulty checking a step involving operations with 2x2 matrix exponentials.
The expression I would like to simplify is
$$\lim _{x\to \infty} e^{-iHt-Vt}$$
where, for some $\epsilon, x, y \in \mathbb{R}$, we define
$$H=\begin{bmatrix}
0 & \epsilon \\
\epsilon & 0
\end{bmatrix} \text{ }\text{ }\text{ }\text{ }\text{ }
V=\begin{bmatrix}
x & 0 \\
0 & y
\end{bmatrix}$$
the undecipherable line says "we easily find that this expression equals
$$\lim _{x\to \infty} e^{-iHt-Vt}=e^{-yt}\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}$$
...". After more than an hour of work, I am not convinced it is so easy. Any assistance or ideas would be of great value.
The necessary information is all on this post, but here is an image of the source if it is helpful to anyone. That image is a standalone section. I was able to prove every step until this one (used BCH formula for A6).
Edit: Here is a previous approach which seems to have gotten to the wrong conclusion (maybe limit composition is illegal in this situation?). The last steps uses the same property as in Omnomnomnom's answer for exponentiating projectors.
$$\lim _{x\to \infty} e^{-iHt-Vt}$$
$$=\lim_{x\to \infty} (e^{-itH/x-t P_- -ytP_+ /x})^x$$
$$\to \lim_{x\to \infty} (e^{-t P_-})^x$$
$$=\lim_{x\to \infty} (I+(e^{-tx}-1)P_-)$$
$$\to P_+ \neq e^{-yt} P_+$$
where $P_-, P_+$ are diag(1,0) and diag(0,1) respectively.
REPLY [1 votes]: I am not convinced by the end of Omno's calculation.
We assume that $t>0$. Let $A=\begin{pmatrix}-z&-ik\\-ik&0\end{pmatrix},w=\sqrt{z^2-4k^2},u=-\dfrac{1}{2}(z+w),v=-\dfrac{1}{2}(z-w)$.
Note that, when $z\rightarrow +\infty$, $u\sim -z,w\sim z$ and $v\rightarrow 0$ (the indeterminacy is here; in particular, a (simple) calculation is necessary).
$\exp(A)=B$ where
$b_{1,1}\sim \exp(u)\rightarrow 0,b_{1,2}\sim \dfrac{-ik}{z}\rightarrow 0,b_{2,2}\sim \dfrac{-u}{w}\rightarrow 1$.
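A numerical check supports the claimed limit (with illustrative values $\epsilon=1$, $t=1$, $y=1/2$; the matrix exponential is computed by eigendecomposition so only numpy is needed):

```python
import numpy as np

def expm2(M):
    # matrix exponential via eigendecomposition (fine for a generic 2x2)
    lam, U = np.linalg.eig(M)
    return U @ np.diag(np.exp(lam)) @ np.linalg.inv(U)

eps, t, y = 1.0, 1.0, 0.5
target = np.exp(-y * t) * np.diag([0.0, 1.0])   # claimed limit e^{-yt} diag(0,1)

errs = []
for x in [1e2, 1e4, 1e6]:
    H = np.array([[0.0, eps], [eps, 0.0]], dtype=complex)
    V = np.diag([x, y]).astype(complex)
    errs.append(np.abs(expm2(-1j * H * t - V * t) - target).max())

assert errs[0] > errs[-1]   # the error decreases as x grows ...
assert errs[-1] < 1e-3      # ... and the limit is approached
```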
TITLE: Evaluating the limit $\lim_{n\to+\infty}(\sqrt[n]{n}-1)^n$
QUESTION [6 upvotes]: Evaluate the limit
$$\lim_{n\to+\infty}(\sqrt[n]{n}-1)^n$$
I know the limit is 0 by looking at the graph of the function, but how can I algebraically show that that is the limit?
REPLY [1 votes]: Since $\frac{\log(x)}{x}\le\frac1e$, we have that
$$
\begin{align}
\sqrt[n]{n}-1
&\le e^{1/e}-1\\
&\lt1
\end{align}
$$
Therefore,
$$
\begin{align}
\lim_{n\to\infty}\left(\sqrt[n]{n}-1\right)^n
&\le\lim_{n\to\infty}\left(e^{1/e}-1\right)^n\\[3pt]
&=0
\end{align}
$$
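The bound in this answer translates directly into a numerical sketch:

```python
import math

def term(n):
    return (n ** (1.0 / n) - 1) ** n

r = math.e ** (1 / math.e) - 1        # e^{1/e} - 1 ≈ 0.4447 < 1
assert 0 < r < 1
for n in [2, 10, 50, 100]:
    assert term(n) <= r ** n          # geometric upper bound from the answer
assert term(100) < 1e-50              # hence the sequence tends to 0
```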
TITLE: $\sum_{k=0}^n {n \choose k} ^{2} = {2n \choose n}$ - Generating function $\sum_{k=0}^\infty \binom nk x^k = (1+x)^n$.
QUESTION [2 upvotes]: As part of a preparatory course for the PUTNAM contest, I have to show $\sum_{k=0}^n {n \choose k} ^{2} = {2n \choose n}$. I know that I can use the identity ${n \choose k} = {n \choose n-k}$ to write the sum as $\sum_{k=0}^n {n \choose k} {n \choose n-k}$, together with the generating function $\sum_{k=0}^\infty \binom nk x^k = (1+x)^n$.
However, I am quite young (16 years old), and generating functions are unknown to me. Would someone be kind enough to describe to me in detail how to do it?
REPLY [4 votes]: You have, with the convention that $\binom{n}{k} = 0$ for $k > n$,
$$
(1+x)^n = \sum_{k=0}^\infty \binom{n}{k} x^k
$$
and
$$
(1+x)^{2n} = \sum_{k=0}^\infty \binom{2n}{k} x^k.
$$
But you also have
$$\begin{align}
(1+x)^{2n} &= (1+x)^n\cdot (1+x)^n = \left(\sum_{k=0}^\infty \binom{n}{k} x^k\right)\left(\sum_{k=0}^\infty \binom{n}{k} x^k\right) \\
&= \sum_{k=0}^\infty\sum_{\ell=0}^\infty \binom{n}{k}\binom{n}{\ell} x^kx^\ell
= \sum_{k=0}^\infty\sum_{\ell=0}^\infty \binom{n}{k}\binom{n}{\ell} x^{k+\ell} \\
&= \sum_{k=0}^\infty\sum_{j=0}^k \binom{n}{k-j}\binom{n}{j} x^{k}
\end{align}$$
By uniqueness of the coefficients of the generating function,
$$
\sum_{k=0}^\infty \binom{2n}{k} x^k = \sum_{k=0}^\infty\sum_{j=0}^k \binom{n}{k-j}\binom{n}{j} x^{k}
$$
implies
$$
\binom{2n}{k} = \sum_{j=0}^k \binom{n}{k-j}\binom{n}{j}
$$
for all $k$. In particular, for $k=n$,
$$
\binom{2n}{n} = \sum_{j=0}^n \binom{n}{n-j}\binom{n}{j} = \sum_{j=0}^n \binom{n}{j}^2.
$$
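Both the identity and the coefficient-matching step behind it can be verified mechanically, using polynomial multiplication for the matching:

```python
from math import comb
import numpy as np

# direct check of sum_k C(n,k)^2 = C(2n,n)
for n in range(0, 15):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

# coefficient matching: (1+x)^n * (1+x)^n = (1+x)^{2n}
n = 5
a = [comb(n, k) for k in range(n + 1)]
c = np.convolve(a, a)          # coefficients of the product polynomial
assert list(c) == [comb(2 * n, k) for k in range(2 * n + 1)]
```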
\begin{document}
\articletype{}
\title{Structure from Appearance: Topology with Shapes, without Points}
\author{
\name{Alexandros Haridis\textsuperscript{a}\thanks{CONTACT A. Haridis. Email: [email protected]}
}
\affil{\textsuperscript{a}Department of Architecture, Massachusetts Institute of Technology, 77 Massachusetts Ave, Room 10-303, Cambridge, MA 02116, USA
}
}
\maketitle
\begin{abstract}
A new methodological approach for the study of topology for shapes made of arrangements of lines, planes or solids is presented. Topologies for shapes are traditionally built on the classical theory of point-sets. In this paper, topologies are built with shapes, which are formalized as unanalyzed objects without points, and with structures defined from their parts. An interpretative, aesthetic dimension is introduced according to which the topological structure of a shape is not inherited from an ambient space but is induced based on how its appearance is interpreted into parts. The proposed approach provides a more natural, spatial framework for studies on the mathematical structure of design objects, in art and design. More generally, it shows how mathematical constructs (here, topology) can be built directly in terms of objects of art and design, as opposed to a more common opposite approach, where objects of art and design are subjugated to canonical mathematical constructs.
\end{abstract}
\begin{keywords}
Shape Topology; Structural Description; Mathematics of Shapes; Point-free Topology; Shape Grammars\end{keywords}
\section{Introduction}
This paper is about a particular type of structure, namely about \emph{topological structure} or \emph{topology}, and it is concerned with how this structure can be defined and studied in terms of interpretations of the appearance of \emph{shapes}. Appearance and interpretation are two concepts that are in many ways foreign to the development of topology in mathematics. Both are, however, inherent in art and design, in the ways designers and artists work when they explore ideas about form and composition. Interpretation is understood here in an aesthetic sense: it is the intuitive and natural act we commonly find in art and design, of describing the sensuous \lq surface\rq\;of a design object or an artwork---its appearance (\lq how it looks\rq)---according to parts we choose to perceive or make salient in it\footnote{\lq Part\rq\;and \lq interpretation\rq\;are concepts that appear in various formal systems; e.g. \cite{LeonardGoodman40, Simons87}. In this paper, they are understood pictorially, and are formalized in a particular formal system for shapes (\emph{Section 2}).}. The shapes I have in mind are the pictorial ones, formed in arrangements of lines or planes, but also the shapes of physical things, formed in arrangements of solids. Such shapes are the basis for drawings, sketches, models, visual compositions and other means of creative expression. My goal in this paper is to study topology directly with such shapes and to make the intuitive act of interpreting their appearances into parts an actual \lq mechanism\rq\;for inducing or generating topologies.
Traditionally, the objects upon which one wishes to confer a topology must be represented as sets, in one space or another; the generality of topological constructions rests on this fact \cite{Kurat72}. In general topology (point-set topology), topologies for shapes are modeled after the classical concept of \lq topological space\rq\;\cite{Munkr00}: a shape is considered a subspace of an underlying ambient space (e.g. Euclidean space), which possesses a set of points \emph{and} a predefined system of subsets containing the points; the subsets determine \lq relations between points\rq\;(Figure 1). The point-set view of shapes is convenient for most applications. It allows one to transfer constructions from the existing literature of topology in mathematics to the study of topology for shapes, without significant alterations. This is the approach followed in many areas of engineering design \cite{RosenPeters96}. Most notably, it is the basis for geometric modeling in computer graphics and Computer-Aided Design \cite{Requi77}.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure1.pdf}
\caption{Graphical representation of (a) infinite and (b) finite topological spaces. A topological space possesses a set of points and a system of \lq open sets\rq\;containing the points.}
\label{Figure1}
\end{figure}
The point-set view of shapes, however, has been characterized as unintuitive and unnatural for domains where there is a strong aesthetic component, such as architecture, design, or the visual arts (evidence from empirical studies can be found in \cite{VerstijnenHennessey98, StonesCassidy2010}; a more general, formal argument is established in \cite{JowEarlStiny2019}). In art and design, shapes are understood and manipulated as geometric objects in their own right---they are not understood in terms of other things (e.g. in terms of specialized point-sets). For design purposes, shapes are synthesized from elements that have an actual appearance and \lq physical\rq\;presentation, with no apparent subdivision scheme (sets of \lq infinitesimally small\rq\;elements of trivial or no content/extension are not naturally compatible with this). Drawings and models, constructed out of shapes, must be seen or touched to be appreciated and evaluated, as wholes or in parts, by designers and artists.
Point-set topology, even though broadly applied to various design and engineering contexts, is not necessarily meant to address spatial or aesthetic concerns. Some aspects of this incompatibility, particularly related to boundaries of shapes, are examined in \cite{Earl97}. With respect to appearance, the idea that a shape can obtain structures from different ways of interpreting its appearance into parts (the designer's or artist's \lq way of working\rq) is essentially absent. Topological investigations concerning shapes usually go in the opposite direction. They establish similarities, or correspondences, between the topologies that may underlie seemingly different shapes---so that topologically similar but perceptually different appearances can be reduced to the same \lq structural class\rq. Classificatory studies concerning knots, braids and links \cite{Rolfsen2003}, and surfaces and simplicial complexes \cite{Lefsch42, Munkr00}, are a few examples of this general direction in the mathematical literature (we can detect the pursuit of topological similarity even in popular opinion, in phrases like \lq a torus is topologically the surface of a donut, or a rubber tire, or a coffee mug\rq). In \cite{Vard2017}, the reader can find a historical examination (and confirmation) of the \lq absence of appearance\rq\;in structural mathematics in general and how it has influenced morphological studies in architecture and design.
With this paper I aim to show that pictorial/spatial ideas and \lq ways of working\rq\;coming from non-mathematical areas with a strong aesthetic component (architecture, design or the visual arts) can actually acquire mathematical substance. This suggests a way of doing mathematics within art and design according to which the relevant mathematical constructs (here, topology) are built in terms of objects directly inspired and taken from art and design, as opposed to working in the more common opposite direction, where those objects must first be subjugated to a canonical mathematical scheme or format.
The basis of this investigation is a formalization of shapes that comes from the theory of computation in design developed around the \emph{shape grammar} formalism \cite{Stiny75}. Shape grammars were originally invented as a model of computation for describing languages of shapes with certain stylistic interest (geometric paintings, sculptures, patterns, architectural plans, ornamentation \cite{StinyGips72, Knight94}). They are now the basis for a full-fledged theory of calculating in art and design \cite{Stiny2006}.
The mathematical theory of shape behind shape grammars provides an alternative to the point-set view of shapes. It is captured in the \emph{algebras of shapes} $U_i$, with a standard classification of shapes in terms of the dimensionality $i$ of their basic elements \cite{Stiny75, Stiny91}\footnote{In the literature, the notation $U_{ij}$ is sometimes used to specify the dimensionality $j$ of the \lq space\rq\;in which a shape is formed. For example, arrangements of lines in two dimensions are shapes in an algebra $U_{12}$. The index $j$ does not play a role in the context of this paper and is thus omitted (it is assumed of course that $i \leq j$).}. Technical details are both intuitive from a design and artistic point of view and formally precise in the mathematical sense. In principle, the algebras $U_i$ provide a point-free theory of shape (when lines, planes or solids are involved), equipped with a primitive concept of \emph{part} \emph{relation}. The part relation puts together \lq appearance\rq\;and \lq interpretation\rq\;at the center of the theory. We essentially have a mechanism that fixes mathematically an intuitive notion of interpretation: that shapes have parts, which are shapes embedded in them, and that those parts are not fixed and given (as members of a set or points of a topological space are given), but are ambiguous and thus can be seen or distinguished in many ways.
In the literature of shape grammars, the implications of using the algebras $U_i$ as a basis for calculation have been examined in depth (the most recent summary of this work is in \cite{Stiny2006}). Research related to the structure of shapes in algebras $U_i$ has also appeared although in fewer numbers \cite{Knight88, Stiny94, Krst96, JowEarlStiny2019}. Most of this research concentrates on spotlighting how structure intervenes when designers use Computer-Aided Design systems, and in what ways it discourages, or eliminates, fluid, open-ended manipulations of a shape's parts (e.g. \cite{Stiny94, JowEarlStiny2019}). In some publications, structures for shapes are examined in more mathematical detail \cite{Krst96}. Specifically for the subject of topology, however, previous work (mainly in \cite{Earl97}, \cite{Krst96} and \cite{Stiny94}) has not examined in full detail the possibility of using the algebras $U_i$ as an alternative foundation to the classical point-set view of shapes for doing topology. My purpose is to contribute to this end.
In this paper, I develop a framework for topology with shapes that takes the algebras $U_i$ as the underlying theory of shape. In this framework, an interpretative, aesthetic dimension is introduced in the sense that \lq topological structure\rq\;is not inherited from a predefined ambient space, but is \emph{induced} based on how a shape is interpreted into parts. \emph{Basis}, \emph{continuity}, \emph{connectedness}, and other foundational topological concepts, are formulated directly in terms of shape and topological structures defined by parts. This paper focuses on \emph{finite topological structures}, that is to say, topologies with finitely many parts---these topologies can be understood as finitistic descriptions of shapes, and are natural tools for describing designs. In the absence of a point-set substrate (a desirable effect of the algebras $U_i$), topological concepts normally defined in terms of points are now formulated only in terms of part relations, and their formulation is naturally driven by pictorial and spatial intuitions. In \emph{Section 2}, I present the algebras of shapes $U_i$, the background required for this work. The technical material on finite topology for shapes is in \emph{Sections 3} through \emph{7}.
I do not specifically focus on how theorems and results for topological spaces can be mimicked or generalized in this framework. In certain cases, it is a straightforward task; some evidence and discussion are provided. More interestingly though, in light of a point-free framework for topology and the introduction of appearance as a key factor for determining structure, I show how concepts that have been worked out for topological spaces do not always transfer naturally. Sets (or spaces) with a definite subdivision into points are not categorically the same objects as shapes without definite parts. This change in the underlying object of study leads to modifications and adaptations that often require some ingenuity to be meaningful for shapes.
The technical material here can also serve as a first introduction to point-free thinking in topology, if one wishes to do so through shapes (see \cite{Johnstone2001}, \cite{Menger40}, and \cite{PicPultr2010}, for general references to point-free topology in mathematics). At least when it comes to the finitistic case, the paper provides a much more intuitive and actually \emph{spatial} framework, compared to other point-free frameworks available in the mathematical literature (such as the theory of locales and frames, e.g. \cite{PicPultr2010}). In the final section of this paper, I discuss how the technical material contributes to studies on the mathematical structure of design objects, in art and design.
\section{Background: shapes and visual calculating}
\subsection{Shapes and algebras $U_i$}
Traditionally, the starting point of topology is the classical theory of point-sets (e.g. \cite{Lefsch42, Munkr00, Barm2011}). In this paper, topology is built on the theory of shape that originates in the shape grammar formalism \cite{Stiny75, Stiny2006}. In this section, I treat those topics of the theory, and only those, that will be needed in \emph{Sections 3} through \emph{7} of this paper. I provide an elementary and rather informal overview, with relatively few technical arguments. For further background and more technical coverage, see \cite{Krst96, Stiny75, Stiny92, Stouffs94}.
In the shape grammar formalism, shapes are understood in a manner analogous to how artists, designers or architects create, appreciate and manipulate drawings and physical models in practice, especially in the early stages of the creative process. Thus, the theory of shape is driven by intuitions coming directly from art and design.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure2.pdf}
\caption{Basic elements for shapes in algebras $U_i$: points, lines, planes and solids.}
\label{Figure2}
\end{figure}
Shapes have finite spatial extension and are made up of basic elements: points, lines, planes and solids. These are things like those in Figure 2. The first three---points, lines, and planes---are pictorial elements, and we can easily draw them on a sheet of paper. Solids represent the shapes of real, physical things. We can visualize solids in a drawing, as in Figure 2, but the drawing itself is made with lines and planes (not solids). Shapes made with lines, planes or solids are unanalyzed and their parts can be seen in many ways. Though shapes, in general, can be made from any combination of basic elements, either of a single kind or of different kinds, the investigations in this paper are concerned only with shapes made with basic elements of a single kind.
Shapes are formalized in algebras $U_i$, originally invented for purposes of calculation with shape grammars \cite{Stiny91}. In the algebras, shapes are classified in terms of the dimensionality $i$ of their basic elements: $i$ = 0 for points, $i$ = 1 for lines, $i$ = 2 for planes, and $i$ = 3 for solids. Some properties of those basic elements are summarized in Table 1. Other basic elements, for example, curves or surfaces, can also be considered to extend the algebras $U_i$ (\cite{Stiny2006, JowEarl2015} provide further elaboration).
\begin{table}[ht]
\tbl{Properties of basic elements.}
{\begin{tabular}{lcccc} \toprule \\
Basic element & Dimension & Boundary & Content & Part relation \\ \midrule
Point & 0 & none & none & identity \\
Line & 1 & two points & length & partial order \\
Plane & 2 & three or more lines & area & partial order \\
Solid & 3 & four or more planes & volume & partial order \\\bottomrule
\end{tabular}}
\label{basicelements-table}
\end{table}
Three things define an algebra of shapes. First, we have the shapes themselves. A shape is a finite arrangement (a set) of basic elements of a certain kind, which are \emph{maximal} with respect to each other. Second, an algebra is equipped with a \emph{part relation}, denoted with $\leq$, that drives the operations of sum (+), product ($\cdot$) and difference (-) for shapes and enables recognition of parts in any given shape. And third, we have \emph{transformations} for changing a given shape into others. A standard example is the set of Euclidean transformations, but the algebras allow linear transformations and possibly other kinds, too. I focus on the first two of these characteristics of the algebras, for they underlie all topological constructions developed later on.
A shape is uniquely defined by the set of its maximal elements \cite{Stiny75}:
\vspace{0.1in}
\noindent \emph{The smallest set containing the biggest basic elements that combine to form the shape}.
\vspace{0.1in}
\noindent Given a shape, one can follow a precise algorithmic procedure to reduce the shape to its maximal elements. The interested reader can refer to \cite{Stiny75, Stiny2006} for technical details. An intuitive way to understand maximality is by analogy to how skilled drafters make line-drawings, either manually or with the help of a digital computer. To make a drawing, a drafter draws the \lq biggest lines\rq \;containing all lines of the drawing (this is a standard practice for students in design schools). The \emph{fewest} such lines needed to complete the drawing correspond precisely to the maximal elements of the drawing. Figure 3 shows examples of shapes made with points, lines and planes, and their corresponding maximal elements.
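For the simplest case---segments lying on a common carrier line, represented as 1D intervals---the reduction to maximal elements can be sketched in code. This is only an illustrative sketch under that assumption (the function name \texttt{maximal\_elements} and the interval representation are mine, not from the shape grammar literature); the general procedure in \cite{Stiny75} handles arbitrary arrangements of lines.

```python
def maximal_elements(intervals):
    """Merge 1D segments on a common carrier line into maximal ones.

    Overlapping or touching segments combine into the fewest 'biggest
    lines' needed to complete the drawing, mirroring the drafter's
    practice described in the text.
    """
    merged = []
    for a, b in sorted(intervals):
        if merged and a <= merged[-1][1]:      # touches or overlaps the last
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

# Three drawn segments reduce to a single maximal line:
print(maximal_elements([(0, 2), (1, 4), (4, 6)]))  # [(0, 6)]
```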
\begin{figure}[ht]
\centering
\includegraphics[scale=0.92]{EmbeddedFigure3.pdf}
\caption{Shapes made with points, lines and planes, and their corresponding maximal elements.}
\label{Figure3}
\end{figure}
The maximal representation of a shape in the algebras serves the purpose of finite describability, which augments the computational potential. The parts of a shape, however, are not only those explicitly defined by its maximal elements but also those that can simply be seen or recognized in it figuratively. The part relation ($\leq$) is the main relation that enables this, for all basic elements in the algebras $U_i$.
The only point that is part of another point is the point itself. A point is like a drawing that has no proper nonempty parts, regardless of how the point looks (see also \cite{Haridis2019}). For shapes made with points in algebra $U_0$, the part relation behaves as an \emph{identity relation} (an \emph{equivalence}). One can explicitly enumerate the parts of a shape by forming sets of zero, one or more points; this is illustrated in Figure 4a, for a shape made with three points.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{EmbeddedFigure4.pdf}
\caption{(a) A shape made with three points and enumeration of its parts. (b), (c) Shapes made with lines or planes have uncountably many parts in them.}
\label{Figure4}
\end{figure}
In an algebra $U_0$, a shape and its parts form a finite Boolean algebra; the atoms are the individual points of the shape. For example, in Figure 4a, every part comes with its complement relative to the shape, and the overall set of parts is closed under finite sums and products (the operations of sum, product and relative complement for shapes are explained a little later). This algebra has for the bottom element a special shape called the \emph{empty shape}, denoted with \emph{0}, and for the top element the shape itself. The \emph{empty shape} (at times called \emph{empty part}, depending on the context), is a shape with no parts; intuitively, it corresponds to a blank sheet of paper, or an empty drawing. The Boolean algebra formed by a shape and its parts in $U_0$, corresponds to the Boolean algebra formed by a finite set and its subsets.
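Because this Boolean algebra corresponds to the algebra of a finite set and its subsets, it can be enumerated directly. The sketch below is a hypothetical illustration (points are modeled as labeled set elements of my own choosing): it lists the $2^3 = 8$ parts of a three-point shape, as in Figure 4a, and checks the relative-complement and closure properties just described.

```python
from itertools import combinations

def parts(shape):
    """All parts of a shape made with points (algebra U_0): its
    subsets, from the empty shape to the shape itself."""
    pts = sorted(shape)
    return {frozenset(c) for r in range(len(pts) + 1)
            for c in combinations(pts, r)}

shape = frozenset({"p1", "p2", "p3"})
P = parts(shape)
print(len(P))  # 8: the finite Boolean algebra of Figure 4a

# Every part has a complement relative to the shape, and the parts
# are closed under sum (union) and product (intersection).
assert all(shape - p in P for p in P)
assert all(p | q in P and p & q in P for p in P for q in P)
```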
Shapes in an algebra $U_i$ when $i > 0$ are made up of basic elements that have uncountably many basic elements of the same kind embedded in them. That is to say, lines have parts that are lines of finite but nonzero length (e.g. Figure 5), planes have parts that are planes of finite but nonzero area, etc. The content (measure) of a basic element, other than a point, is never zero (nor is the content of its nonempty parts). For shapes in an algebra $U_i$ where $i > 0$, the part relation behaves as a \emph{partial order} (see \cite{Stiny75} for the mathematical argument). For any given shape one can \emph{trace}, or equivalently \emph{recognize} or \emph{see}, uncountably many different parts embedded in it (including, but not limited to, its maximal parts), which are shapes made up of basic elements of the same kind as the shape itself. A graphical illustration is given in Figure 4b and 4c.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure5.pdf}
\caption{The parts of a line in algebra $U_1$ are never points but lines of finite but nonzero length.}
\label{Figure5}
\end{figure}
In an algebra $U_i$ where $i > 0$, a shape and its parts form an infinite Boolean algebra. Every part of the shape has a complement relative to the shape, finite sums and products of parts determine parts of the same shape, the \emph{empty shape} is the bottom element and the shape itself is the top element. This algebra, however, is atomless. It is also not complete, because infinite sums or products of parts do not necessarily determine parts/shapes \cite[p.~208]{Stiny2006}.
Even though shapes have uncountably many parts to see in them (when $i > 0$), they are not themselves infinite objects: a shape does not come equipped with the set of its parts given all at once! Instead, a shape is a \emph{finite} object, kept in maximal element representation, but has a natural ability to be divisible into parts indefinitely (real physical drawings and models in art and design have these properties, too). Intuitively, you choose parts by tracing them or equivalently seeing them. There may be uncountably many to see, but each time you see finitely many of them---you do not see infinitely many parts all at once. This distinguishing characteristic of shapes, namely their finite describability and at the same time their potential for indefinite divisibility, is central to the topological constructions developed in this paper. A few other background concepts are needed before we proceed.
All basic elements besides points have a unique boundary (Table 1). The boundary of a basic element is made up of basic elements of exactly one dimension lower than itself (e.g. the boundary of a line is formed by two points). This leads to the following result: the boundary of a basic element is not a part of the basic element. More generally, given a shape in an algebra $U_i$, for $i > 0$: (i) the boundary of the shape is a \emph{shape} made with basic elements of dimension $i$ - 1, and (ii) this boundary shape is not a part of the original shape. (For further details on shapes and their boundaries, and a comparison with the corresponding concepts in point-set topology, see \cite{Earl97}.)
The algebras $U_i$ are closed under operations of sum (+), product ($\cdot$), and difference (-). These operations mimic very closely how drawings are created and manipulated in practice, in art and design. The operations are formally defined in terms of the part relation for shapes (details are given in \cite{Stiny75}; see also \cite{Krishn92a, Krishn92b}).
Suppose $S$ and $S'$ are two shapes, both made with the same kind of basic elements (e.g. $S$ and $S'$ are both made with points or both made with lines). Say that $S$ is part of $S'$, and denote it with $S \leq S'$, if every maximal element of the first is embedded in a maximal element of the second (more generally, under an appropriate transformation). An intuitive way to understand this is by an analogy to manual drawing: first draw the shape $S$ and then the shape $S'$; if the resulting drawing is $S'$, then $S$ is part of $S'$. A brief description of sum, product and difference for shapes now follows.
\vspace{12pt}
{\small \noindent \emph{Sum}\;\; $S$ + $S'$, is the unique shape formed by adding two shapes together; it corresponds to the act of drawing shape $S$ and then shape $S'$. The resulting shape satisfies two conditions: (i) $S$ and $S'$ are part of the sum, and (ii) every part of the sum has a part that is part of one shape or the other. If $S \leq S'$, then $S$ + $S'$ = $S'$. We also have that $S$ + $S$ = $S$, for any shape $S$.}
\vspace{12pt}
{\small \noindent \emph{Difference}\;\; $S$ - $S'$, is the unique shape formed by subtracting from $S$ all parts that are shared with $S'$; it corresponds to the manual act of erasing parts from $S$. Every part of the difference is a part of $S$ but not $S'$.}
\vspace{12pt}
{\small \noindent \emph{Product}\;\; $S \cdot S'$, is the largest part shared by both $S$ and $S'$. Alternatively, the product is formed by the difference: $S \cdot S'$ = $S$ - ($S$ - $S'$). When two shapes share no parts, $S \cdot S'$ is equal to the \emph{empty shape}. In this case, it also follows that $S$ - $S'$ = $S$ and $S'$ - $S$ = $S'$. We also have that $S \cdot S$ = $S$, for any shape $S$.}
\vspace{12pt}
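The part relation underlying these operations can itself be sketched for the simplest case of shapes whose maximal elements are segments on a common carrier line, represented as 1D intervals (an illustrative assumption of mine; transformations are omitted): $S \leq S'$ holds when every maximal element of $S$ is embedded in some maximal element of $S'$.

```python
def is_part(S, S2):
    """S <= S': every maximal element of S (a 1D interval) must be
    embedded in some maximal element of S2. Transformations omitted."""
    return all(any(a2 <= a and b <= b2 for (a2, b2) in S2)
               for (a, b) in S)

assert is_part([(1, 2)], [(0, 3)])      # a shorter line is part of a longer one
assert not is_part([(0, 3)], [(1, 2)])  # but not conversely
assert is_part([], [(0, 3)])            # the empty shape is part of every shape
```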
\begin{table}[ht]
\tbl{Operations of sum, product and difference for shapes.}
{\begin{tabular}{ccccc} \toprule \\
$S$ & $S'$ & Sum & Product & Difference \\ \midrule\\
\includegraphics[scale=0.85]{EmbeddedTable2_Fig00.pdf} & \includegraphics[scale=0.85]{EmbeddedTable2_Fig01.pdf} & \includegraphics[scale=0.85]{EmbeddedTable2_Fig02.pdf} & \includegraphics[scale=0.85]{EmbeddedTable2_Fig03.pdf} & \includegraphics[scale=0.85]{EmbeddedTable2_Fig04.pdf} \\\\\\
\includegraphics[scale=0.95]{EmbeddedTable2_Fig05.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig06.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig07.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig08.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig09.pdf} \\\\\\
\includegraphics[scale=0.95]{EmbeddedTable2_Fig10.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig11.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig12.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig13.pdf} & \includegraphics[scale=0.95]{EmbeddedTable2_Fig14.pdf}
\\\\\bottomrule
\end{tabular}}
\label{operations-table}
\end{table}
An illustration of the three operations is given in Table 2, for shapes made with points, lines and planes. Notice that for shapes made with points, sum, product and difference coincide, respectively, with union, intersection and difference for sets.
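Since the operations coincide with set operations in the case of points, the identities stated above can be checked directly for shapes in an algebra $U_0$ (a minimal sketch; the particular point labels are arbitrary):

```python
# Shapes made with points (algebra U_0), modeled as sets of labels.
S, S2 = {"a", "b", "c"}, {"b", "c", "d"}

assert S | S2 == {"a", "b", "c", "d"}  # sum     = union
assert S & S2 == {"b", "c"}            # product = intersection
assert S - S2 == {"a"}                 # difference
assert S & S2 == S - (S - S2)          # S . S' = S - (S - S')
assert S | S == S and S & S == S       # S + S = S,  S . S = S
```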
Last, shapes have no \lq absolute\rq \;complements because the algebras $U_i$ do not have a unit \cite[pp.~205--209]{Stiny2006}, i.e. there is no \lq largest shape\rq\;that contains all others (such an all-encompassing concept would be meaningless for pictorial and spatial objects in design). A shape can have a complement only \emph{relative to} a specific shape. The relative complement of shape $S$ with respect to shape $S'$ is equal to the shape $S'$ - $S$.
\subsection{Aesthetic interpretation of shapes (and artworks)}
The part relation in the algebras $U_i$ is the driving force of calculations with shape grammars. More than this, the part relation is the formal, mathematical mechanism that enables the possibility of interpreting the appearance of a shape aesthetically.
Previously, it was mentioned that any shape is defined in terms of its maximal elements. Maximal elements, however, are not the only parts that can be recognized or perceived in a shape (whether individually or in combinations). The part relation for basic elements of dimension $i > 0$ supports seeing beyond them. The following is a consequence of that:
\vspace{0.1in}
\centerline{\emph{How a shape is defined is independent of how it is interpreted aesthetically}.}
\vspace{0.1in}
\noindent Consider the shape made with lines in Figure 6a (basic elements of dimension $i$ = 1). The maximal elements of this shape are shown in Figure 6b; there are eight lines. Independently of those maximal elements, one can see within the same shape plenty of others. For example, we can see the shape in Figure 6c (this is drawn after Wassily Kandinsky \cite[p.~138]{Kandinsky47}): each line of the shape on the right is part of a maximal element of the shape on the left. We can also see all the shapes in Figure 7; these can be embedded (seen) in the same shape in multiple ways, for example, as in Figure 8. Since basic elements of dimension greater than zero can be divided in any number of ways into any number of parts, there are indefinitely many \emph{different} parts to recognize in any given shape---a graphical example of this is in Figure 9.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure6.pdf}
\caption{(a) A shape made with lines and (b) the maximal elements of this shape. (c) An interpretation of the appearance of the shape in (a) that results in a new shape that is part of the original one.}
\label{Figure6}
\end{figure}
These examples are suggestive of a more general approach to the interpretation of the appearance of a shape that is based on seeing. Interpretation, as it is understood here, is the act of describing the sensuous \lq surface\rq \;of a shape: whatever it is that it appears to be doing on its surface. It is about \emph{aesthetic interpretation}, that is to say, an interpretation of the way the shape looks. An aesthetic interpretation can be understood as a description of the appearance of a shape in terms of certain parts---the parts that you choose to see in it. For any shape without points, there can be indefinitely many \emph{different} aesthetic interpretations of its appearance, regardless of how this shape was put together to begin with. To say this differently:
\vspace{0.1in}
\centerline{\emph{The parts used to make the shape are not necessarily the parts you choose to see in it}.}
\vspace{0.1in}
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure7.pdf}
\caption{Examples of parts embedded in the shape in Figure 6a.}
\label{Figure7}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure8.pdf}
\caption{Alternative ways of embedding shapes from Figure 7 in the same shape in Figure 6a. These embeddings happen under different Euclidean transformations.}
\label{Figure8}
\end{figure}
\noindent This naturally connects with how artworks, especially the pictorial ones, are appreciated by their different viewers. Things like drawings, sketches, paintings, visual compositions, etc., are pictorial objects made with shapes, whose appearance can be appreciated in terms of the parts that a viewer perceives in them. For example, the drawing in Figure 10 by Paul Klee is an arrangement of lines and curves. Like shapes, the appearance of this drawing can be interpreted in multiple ways, perhaps in terms of the series of parts in Figure 11. Similarly, the drawing in Figure 12 by Pablo Picasso can be considered as a composition of lines or a composition of planes, with a possible interpretation in terms of the parts shown in Figure 13.
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedFigure9.pdf}
\caption{Infinitely many parts of the shape in Figure 6a obtained by successive scalings of a part that looks like a capital letter Y (\textit{Source}: Redrawn from Stiny (2006)).}
\label{Figure9}
\end{figure}
These two examples are indicative of what is possible. In both of them, the visible surface of the artwork has been treated as a shape without points. Interestingly enough, interpretations of this kind can happen in an open-ended manner for virtually any aesthetic object (drawing, painting, sculpture, building, etc.), independently of how this object was conceived or made by its original artist. The parts that a viewer appreciates each time in an aesthetic object---the same viewer multiple times, or multiple viewers at the same time---are the parts that the viewer chooses to embed or recognize in it.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure10.jpg}
\caption{\textit{Model 32B as Unequal Halves via Linear Mediation of Straight Lines} (1931), by Paul Klee, ink on paper mounted on board, 45 x 58 cm. Extended loan and promised gift of the Carl Djerassi Trust I.}
\label{Figure10}
\end{figure}
Any interpretation of the appearance of a shape (or an artwork for that matter), can be said to impose a certain structure or organization on the appearance of the shape. This structure is a result, a byproduct, of the parts that one perceives in the shape (or artwork)---it is an invention of the viewer. This highlights two things: (i) the same shape is a host of structures, each one invented from the parts that one chooses to see in it, and (ii) any structure imposed on a shape as a result of an interpretation is not permanent but changeable.
My main goal in this paper is to define and study topology on a shape based on interpretations of its appearance into parts. More specifically, how can the parts recognized in an aesthetic interpretation be used to define a topological structure for the shape in terms of them? Given that such a topological structure is possible, what are some of the mathematical properties that can be stated about this structure? What modifications or adaptations of classical topological concepts are necessary in order to accommodate shapes in their full pictorial and spatial potential?
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.8]{EmbeddedFigure11.pdf}
\caption{Interpretations of Klee's artwork in Figure 10 in terms of parts embedded in it.}
\label{Figure11}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.25]{EmbeddedFigure12.jpg}
\caption{\textit{Two Figures, Seated, after Pablo Picasso} (1920), by Pablo Picasso, stencil, 21.4 x 26.7 cm. \textcopyright\;2019 Estate of Pablo Picasso / Artists Rights Society (ARS), New York.}
\label{Figure12}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedFigure13.pdf}
\caption{Interpretations of Picasso's artwork in Figure 12 in terms of parts embedded in it. The parts are given as lines and as planes.}
\label{Figure13}
\end{figure}
\subsection{Previous work on finite topologies for shapes}
Topology for shapes in algebras $U_i$ is a relatively undeveloped research subject. In the literature of shape grammars, \cite{Stiny94} is the first study to propose a framework for topology by using \lq closure algebras\rq\;(historically, closure algebras---introduced in \cite{McKinTarski44}---were one of the first attempts in mathematics to define topology from algebraic information, without mentioning points \cite{Johnstone2001}). It is evident in this first study that topologies for shapes can be defined as finite part-structures (structures that consist of finitely many parts). Mathematical properties of those structures were then studied in \cite{Krst96}. One of the main ideas in \cite{Krst96} is that a \lq topology for a shape\rq\;can be described as a simple, finite lattice of (closed) parts (the part relation in the algebras induces the order in the lattice).
A more general result about the topology underlying the algebras $U_i$ was given in \cite{Earl97}. For every algebra $U_i$, there is an associated topology given by the Stone representation of Boolean algebras (e.g. \cite{Johnstone82}). Any finite topology induced on a shape will essentially have its members (parts) embedded in the underlying Stone space of the specific algebra that the shape comes from. This underlying Stone space, however, adds no descriptive restrictions. That is, it does not signify, or restrict, the parts of a shape that can be distinguished for purposes of description \cite{Earl97, Stiny2006}.
Besides the aforementioned studies, no other significant work has been produced on the subject. This paper essentially provides the first exclusive and systematic treatment of finite topology for shapes in algebras $U_i$. The focus is on shapes made of basic elements of dimension $i > 0$. The topology applicable to shapes made with points (when $i = 0$) is not discussed explicitly here; it is the subject of a different paper \cite{Haridis2019}.
The main methodological approach is to replace the earlier formulation of finite topologies through closure algebras with a new framework in which the concept of \lq open part\rq\;is the only \emph{primitive} concept assumed. All topological concepts and constructions are built in terms of this assumption. Concepts that have been introduced in earlier publications are reconfigured (e.g. \emph{Sections 4} and \emph{5}), and many foundational concepts are introduced for the first time (e.g. \emph{Sections 3}, \emph{6}, and \emph{7}).
In the following presentation, topological ideas are formulated in such a way that they can have an immediate and natural grounding in pictorial and spatial examples. Most of the time, the intuition comes from imagining what happens when graphical figures \lq interact\rq\;spatially. This grounding is crucial from a technical standpoint, because it is exactly what highlights the many ways in which shapes are different from point-sets and topological spaces.
\section{Shape topology}
\subsection{Definition and the concept of open part}
A \emph{topology} for a shape is a finite set $\mathcal{T}$ of parts of the shape that satisfies the following three conditions:
\begin{enumerate}
\item[(1)] The \emph{empty shape} (\emph{0}) and the shape itself are in $\mathcal{T}$.
\item[(2)] The sum (+) of an arbitrary number of parts in $\mathcal{T}$ is also in $\mathcal{T}$.
\item[(3)] The product ($\cdot$) of an arbitrary number of parts in $\mathcal{T}$ is also in $\mathcal{T}$.
\end{enumerate}
\noindent The members of a topology for a shape are shapes; they are the parts of the shape the topology recognizes. A shape without points has uncountably many parts (\emph{Section 2}). Thus, since a topology is required to be finite, a shape without points admits uncountably many different topologies, each one made up of \emph{finitely many} parts (infinite topologies on shapes are not considered in this paper). The collection \{\emph{0}, $S$\} is the \emph{smallest} finite topology we can have on any given shape $S$. There exists no \emph{largest} finite topology on a shape without points.
Given a shape $S$, a topology can be constructed on $S$ based on the parts recognized in an interpretation of $S$. This motivates the following recursive description of a generated topology.
\vspace{12pt}
{\small \noindent \emph{Recursive description of a generated topology}\;\; Let $\mathcal{P}$ be a finite set of nonempty shapes all of which are part of shape $S$, and such that the sum of the shapes in $\mathcal{P}$ is equal to $S$. Define the sets $\mathcal{T}_0$, $\mathcal{T}_1$, $\mathcal{T}_2$... recursively as follows:
\begin{enumerate}
\item[(i)] $\mathcal{T}_0 = \mathcal{P} \cup \{ \emph{0} \}$.
\item[(ii)] For each $k \geq 1$, define the set
\vspace{0.1in}
\centerline{$\mathcal{T}_k$ = $\mathcal{T}_{k-1} \cup \{s_1, ..., s_m\}$}
\vspace{0.1in}
\noindent where $s_1, ..., s_m$ are shapes such that, for all $i = 1, ..., m$, shape $s_i$ is either a sum of a finite number of nonempty shapes in $\mathcal{T}_{k-1}$, or a product of such shapes.
\end{enumerate}
\noindent The topology on $S$ generated by $\mathcal{P}$ is equal to $\mathcal{T}_{n}$ for an $n < \infty$ for which $\mathcal{T}_{n} = \mathcal{T}_{n+1}$, that is to say, $\mathcal{T}_{n+1}$ introduces no new parts.}
\vspace{12pt}
This recursive description provides a procedure for obtaining a topology in terms of certain initial parts, in finitely many steps ($\mathcal{P}$ is finite and (i) and (ii) are determined by finite operations). A concrete example is given in \emph{Example 1}.
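The recursive procedure above can be simulated directly, since $\mathcal{P}$ is finite and steps (i) and (ii) involve only finitely many sums and products. The following Python sketch is only an illustrative approximation: shapes in algebras $U_i$ are not point-sets, so recognized parts are modelled here as finite sets of hypothetical primitive labels, with sum as union and product as intersection (the function name and the example parts are assumptions, not part of the theory).

```python
from itertools import combinations

def generate_topology(parts):
    """Start from T_0 = P plus the empty shape, then repeatedly add sums
    (unions) and products (intersections) of pairs until T_{n+1} = T_n."""
    tops = {frozenset(p) for p in parts} | {frozenset()}
    while True:
        new = set()
        for a, b in combinations(tops, 2):
            new.add(a | b)    # sum of two parts
            new.add(a & b)    # product of two parts
        if new <= tops:       # fixpoint: no new parts introduced
            return tops
        tops |= new

# Hypothetical recognized parts of a shape S whose primitives are labelled 1..4:
P = [{1, 2}, {2, 3}, {3, 4}]
T = generate_topology(P)
```

Only pairwise sums and products are formed at each step, but arbitrary finite sums and products accumulate over the iterations; the fixpoint test $\mathcal{T}_{n+1} = \mathcal{T}_n$ corresponds to the condition `new <= tops`.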
The same procedure can be followed even if the shape $S$ is already equipped with a topology $\mathcal{T}$. Suppose $\mathcal{P}$ is a set of newly recognized parts of $S$, none of which is already open in the existing topology for $S$. Then, let $\mathcal{T}_0$ be equal to $\mathcal{T} \cup \mathcal{P}$ and proceed recursively as described above (it is assumed that the \emph{empty shape} is already included in $\mathcal{T}$). The resulting topology will be a \emph{refinement} of the existing topology on $S$ in terms of the newly recognized parts.
A few notes on the terminology. At times I refer to the topology $\mathcal{T}$ for a shape $S$ as a \emph{shape topology} (this is for distinguishing between topologies for shapes and topologies for parts, i.e. subshape topologies; see \emph{Section 3.3}). To distinguish the parts of a shape that are members of a topology from those that are not, I use the term \emph{open parts} to refer to the parts in the topology. Thus, as a general rule of thumb, on the side of the terminology, the following two-way implication is assumed:
\vspace{12pt}
\centerline{$x$ is a recognized part of $S$ $\iff$ $x$ is \emph{open} in $S$, i.e. $x$ is in a topology for $S$.}
\vspace{12pt}
\noindent Using this terminology, one can say that a topology for a shape $S$ is a set $\mathcal{T}$ of open parts of $S$, satisfying the three basic conditions for a topology.
The notation ($S$, $\mathcal{T}$) can be used to refer to a shape $S$ that possesses a topology $\mathcal{T}$. Alternatively, the notation $OS$ can be used to refer specifically to the \emph{lattice of open parts} determined by the topology $\mathcal{T}$ (the order of this lattice is induced by the part relation ($\leq$) in the algebras $U_i$, where $i > 0$; this lattice is complete, distributive, with join and meet replaced, respectively, by sum and product for shapes). Terminological conventions brought by lattice theory are in general avoided, except for the cases where those conventions are important; see, for example, \emph{Section 6} on \emph{Continuity}.
The parts of a shape which are members of a particular topology are the \emph{only} open parts of this shape with respect to that topology. Any part of a shape is \lq visible\rq, in the sense that we can see it, but is not necessarily open. A part of a shape is \lq open\rq, if and only if, it is a member of a topology for the shape.
Finite topologies for shapes can be represented in two ways. First, by a list of open parts (with an implied ordering). Second, by a lattice diagram. In this paper, I use the latter representational method throughout. To simplify the presentation, I have chosen to omit directed lines between open parts.
\subsection{Basis for a topology on a shape}
It is possible to specify a topology for a shape by describing the entire set $\mathcal{T}$ of open parts. When this is too difficult, one specifies a smaller set of parts and describes the topology in terms of them. This smaller set is called a \emph{basis}.
Let $S$ be a shape and $\mathcal{B}$ a set of parts of $S$. $\mathcal{B}$ is a basis for a topology on $S$ if:
\begin{enumerate}
\item[(1)] The sum of the parts in $\mathcal{B}$ is equal to $S$.
\item[(2)] If $b_1$ and $b_2$ are two parts in $\mathcal{B}$, then there is a third part $b_3$ in $\mathcal{B}$, such that $b_1 \cdot b_2$ = $b_3$, or $b_1 \cdot b_2$ is the sum of parts from $\mathcal{B}$.
\end{enumerate}
\noindent If $\mathcal{B}$ satisfies conditions (1) and (2), we define the topology $\mathcal{T}$ generated by $\mathcal{B}$ as follows: A part $x$ of $S$ is said to be open (that is, to be a member of $\mathcal{T}$) if it is a basis element of $\mathcal{B}$ or if it can be described as a sum of basis elements from $\mathcal{B}$.
From this defining condition of openness it is obvious that all basis elements are themselves open. Moreover, given a part $C$ open in $S$, if $C$ is not itself a basis element, then we can choose basis elements $b_i$ embedded in $C$, i.e. $b_i \leq C$, so that $C = \sum b_i$---this fact will prove very handy on many occasions in this and in the following sections. Note that the \emph{empty shape} is always included as a member of a basis. It will usually be omitted, however, whenever a basis is presented explicitly.
Let us now check that the set $\mathcal{T}$ generated by $\mathcal{B}$ is indeed a topology for $S$ (i.e. that it satisfies the three conditions given in \emph{Section 3.1}). The shape $S$ is open, by condition (1) above. Similarly, the \emph{empty shape} is open because it is a member of $\mathcal{B}$, by definition. Now, given $C = \sum C_\alpha$ an arbitrary sum of open parts, $C$ is open because for each index $\alpha$, $C_\alpha$ is either a basis element or the sum of basis elements, by definition.
Finally, we need to show that $D = \prod D_\alpha$, an arbitrary (finite, in our case) product of open parts, is open. Let us illustrate this in the simpler case where $D$ = $D_1 \cdot D_2$. Since $D_1$ and $D_2$ are open, they are either basis elements or can be expressed as sums of basis elements (according to the defining condition of openness). Without loss of generality, suppose $D_1 = \sum b_i$ and $D_2 = \sum b_j$. Then, we can write $D$ as
\vspace{12pt}
\centerline{$D = \Big(\sum_i b_i\Big) \cdot \Big(\sum_j b_j\Big) = \sum_{i,j} (b_i \cdot b_j).$}
\vspace{12pt}
\noindent By condition (2) above, each product $b_i \cdot b_j$ is either a basis element or a sum of basis elements, so that $D$ is open as desired.
We now know that to describe a topology for a shape $S$, it suffices to present a set $\mathcal{B}$ of parts of $S$ which satisfies conditions (1) and (2). To actually construct the topology, that is to say, to compute all the open parts, the recursive procedure of \emph{Section 3.1} can be used; simply let $\mathcal{P}$ be equal to $\mathcal{B}$ and proceed in the obvious way (from conditions (1) and (2), it is easy to see that sum is the only operation needed in this case).
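Conditions (1) and (2), and the generation of a topology from a basis, can likewise be sketched under the same finite-set approximation (parts as sets of hypothetical primitive labels; sum as union, product as intersection). The function names and the example basis below are assumptions for illustration only.

```python
from itertools import chain, combinations

def is_basis(B, S):
    """Conditions (1) and (2): the parts in B sum to S, and each pairwise
    product is recoverable as the sum of the B elements embedded in it."""
    B = [frozenset(b) for b in B]
    if frozenset().union(*B) != frozenset(S):          # condition (1)
        return False
    for b1, b2 in combinations(B, 2):                  # condition (2)
        p = b1 & b2
        if frozenset().union(*(c for c in B if c <= p)) != p:
            return False
    return True

def topology_from_basis(B):
    """The open parts are exactly the sums (unions) of subsets of B."""
    B = [frozenset(b) for b in B]
    subsets = chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))
    return {frozenset().union(*s) for s in subsets}

B = [{2}, {1, 2}, {2, 3}]          # hypothetical basis on S = {1,2,3}
T = topology_from_basis(B)
```

As the text notes, sum is the only operation needed to build the topology once conditions (1) and (2) hold, which is why `topology_from_basis` takes unions only.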
Often a topology for a shape can be described with more than one set of parts. That is to say, a topology may have multiple bases (for example, the topology itself can be taken as a basis). Since the topologies we deal with here are finite, we can naturally describe them with \lq minimal\rq\;bases. Let us examine what exactly is expected from such a basis.
A minimal basis is a basis with the \emph{smallest} possible cardinality or, equivalently, a basis whose elements are contained in every other basis for the same topology (such a basis must be unique for finite structures). In particular, it must be able to describe (i.e. generate) the elements of every other basis for the same topology. This can be achieved by forming a set of parts that does \emph{not} contain a part expressible as a sum of other parts from the \emph{same} set. It is not hard to see, if not immediate, that a basis that satisfies conditions (1) and (2) only is not necessarily minimal---we can often create multiple bases for a topology, all of which satisfy the two conditions, but none of which is minimal. Moreover, if we are presented with a basis for a particular topology, conditions (1) and (2) give no general rule for telling whether it is actually minimal (unless of course we compare it with every other possible basis, one by one).
To address minimality, an additional requirement is introduced. A basis $\mathcal{B}$ for a topology on $S$ is a \emph{minimal basis} if, in addition to (1) and (2), it also satisfies the following condition:
\begin{enumerate}
\item[(3)] If $b_1$ and $b_2$ are two parts in $\mathcal{B}$ and $b_1$ + $b_2$ is also in $\mathcal{B}$ then either $b_1$ + $b_2$ = $b_1$ or $b_1$ + $b_2$ = $b_2$.
\end{enumerate}
\noindent In \emph{Appendix A.1}, I show that for any topology $\mathcal{T}$, a set of parts $\mathcal{B}$ that satisfies conditions (1) through (3) is a \emph{unique minimal basis} for $\mathcal{T}$. If we are presented with a basis for a topology we can always reduce it to its (unique) \lq minimal version\rq\;by removing the elements that do not obey condition (3). Moreover, we can use these conditions to decide if a given basis is minimal or not (e.g. see \emph{Example 1}).
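The reduction of a basis to its minimal version can be sketched in the same finite-set approximation used earlier (parts as sets of hypothetical labels, sum as union); the greedy removal below relies on the uniqueness of the minimal basis for finite structures, and the example is an assumption for illustration.

```python
def reduce_basis(B):
    """Remove every element that is the sum (union) of the other elements
    embedded in it, i.e. every element violating condition (3); for a
    finite basis this yields the unique minimal (reduced) basis."""
    B = {frozenset(b) for b in B} - {frozenset()}
    reduced = set(B)
    for b in sorted(B, key=len, reverse=True):   # try largest parts first
        others = [o for o in reduced if o != b and o <= b]
        if others and frozenset().union(*others) == b:
            reduced.discard(b)
    return reduced

# {1,2,3} = {1,2} + {2,3} is a sum of other basis elements, so it is removed:
B = [{2}, {1, 2}, {2, 3}, {1, 2, 3}]
R = reduce_basis(B)
```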
In the rest of this paper, I will refer to the unique minimal basis of a topology as a \emph{reduced basis}; whether a basis is supposed to be reduced or not will be made explicit every time from context.
\subsection{Subshape topology and covering}
Suppose $S$ is a shape with a topology $\mathcal{T}$. This topology can be \lq transferred,\rq \;or relativized, to a part of $S$ that is not necessarily open in $\mathcal{T}$. The part then inherits a topology determined by the existing open parts in $\mathcal{T}$. This motivates the concept of a \emph{subshape topology}. Let $x$ be a part of $S$. The collection,
\[
\mathcal{T}_x = \{\;x \cdot C \;|\; C \textrm{ an \emph{open part} in } \mathcal{T}\;\}
\]
\noindent is a topology for $x$, called the subshape topology. The open parts in $\mathcal{T}_x$ consist of all products of open parts of $S$ with the part $x$.
Let us check that the set $\mathcal{T}_x$ is a topology. The shape $x$ and the \emph{empty shape} are in $\mathcal{T}_x$ because
\vspace{0.1in}
\centerline{\emph{0} = $x \cdot \emph{0}$ and $x$ = $S \cdot x$,}
\vspace{0.1in}
\noindent where \emph{0} and $S$ are open in $\mathcal{T}$. The fact that arbitrary sums and products of open parts are open in $\mathcal{T}_x$ follows from the equations
\vspace{0.1in}
\centerline{$\sum (C_a \cdot x) = (\sum C_a) \cdot x$ and $\prod (C_a \cdot x) = (\prod C_a) \cdot x$.}
\vspace{0.1in}
Using subshape topology, one can pick an arbitrary part $x$ of $S$ and construct a topology on $x$ by reusing the existing topology $\mathcal{T}$ on $S$. The subshape topology is a \lq relativization\rq \;of $\mathcal{T}$ with respect to $x$. If $x$ is already open in $\mathcal{T}$, then obviously $\mathcal{T}_x \subset \mathcal{T}$.
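The relativization $\mathcal{T}_x$ is a one-line computation under the finite-set approximation used earlier (parts as sets of hypothetical labels, product as intersection); the example topology and part below are assumptions for illustration.

```python
def subshape_topology(T, x):
    """Relativize a topology T on S to a part x: T_x = { x . C : C in T },
    with the product of shapes modelled as set intersection."""
    x = frozenset(x)
    return {x & C for C in T}

# Hypothetical 9-member topology on S with primitives labelled 1..4:
T = {frozenset(s) for s in
     [(), (2,), (3,), (1, 2), (2, 3), (3, 4), (1, 2, 3), (2, 3, 4), (1, 2, 3, 4)]}
Tx = subshape_topology(T, {1, 3})   # {1,3} is not itself open in T
```

Note that when $x$ is open, every member of $\mathcal{T}_x$ is a product of two open parts and hence $\mathcal{T}_x \subset \mathcal{T}$, as stated above.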
Similar to the basis for a shape topology, we can define a basis for the subshape topology. If $\mathcal{B}$ is a basis for a topology $\mathcal{T}$ on $S$, then the set,
\[
\mathcal{B}_x = \{\;x \cdot b \;|\; b \textrm{ a \emph{basis element} in } \mathcal{B}\;\}
\]
\noindent is a basis for the subshape topology $\mathcal{T}_x$ on $x$; the members of $\mathcal{B}_x$ consist of all products between $x$ and all the basis elements in $\mathcal{B}$. That the set $\mathcal{B}_x$ is indeed a basis for $\mathcal{T}_x$, is proven in \emph{Appendix A.2}. In \emph{Example 1}, I show how to construct subshape topologies based on an existing topology for a shape.
When referring to open parts, one needs to specify if those parts are open with respect to a subshape topology or with respect to a shape topology. By transitivity, if a part is open in the subshape topology $\mathcal{T}_x$ for a part $x \leq S$ and $x$ is itself open in a topology $\mathcal{T}$ for $S$, then this part is also open in $\mathcal{T}$. Unless otherwise stated, an open part will always be assumed to be \lq open\rq\;with respect to a (shape) topology for $S$.
Some further topological concepts are readily defined. A set of open parts of $S$ is said to \emph{cover} $S$, or to be a \emph{covering} of $S$, if the sum of the parts in this set is equal to $S$. For example, a basis for a topology on $S$ is automatically a covering for $S$. A \emph{subcovering} of $S$ is a subset of a covering that still covers $S$. A covering of a nonempty part $x$ of $S$ is a set of open parts of $S$, such that the sum of the parts in this set has $x$ as a part. A subcovering for $x$ is defined analogously.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure14.pdf}
\caption{(a) A shape and (b) three selected visible parts of this shape.}
\label{Figure14}
\end{figure}
A complementary idea is the intuitive notion of \lq exhaustion.\rq\; Suppose you want to recognize certain parts in a given shape $S$. If the sum of these parts is equal to $S$, then these parts can be said to \lq exhaust\rq\;the appearance of shape $S$. For example, the parts recognized in Figure 14b exhaust the shape in Figure 14a. In \emph{Section 2}, the parts recognized in all rows of Figure 8 exhaust the shape in Figure 6a. It is not, however, necessary for an interpretation to be exhaustive; one may choose to recognize only some portion of $S$ (not the whole shape). For example, the right shape in Figure 6c, and the series of shapes in Figure 9, do not exhaust the shape in Figure 6a: the recognized parts do not sum to $S$. This concept of exhaustion is now connected to topology.
A topology $\mathcal{T}$ for a shape $S$ is said to \emph{exhaust} $S$, when the sum of the parts in the set $\mathcal{T} \setminus \{S\}$ is equal to $S$. (The symbol $\setminus$ stands for the operation of set-difference.) If $\mathcal{T}$ does not exhaust $S$, then the sum of the parts in the set $\mathcal{T} \setminus \{S\}$ leaves an \lq undivided\rq \;complement relative to $S$. For example, the topologies in Figure 15a, Figure 16a and 16b exhaust the shape in Figure 14a; the topology in Figure 16c does not.
When a topology $\mathcal{T}$ does \emph{not} exhaust $S$, the basis that generates $\mathcal{T}$ must include $S$ itself. In other words, if a topology does not exhaust a shape, the shape itself has to be taken as one of its basis elements.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{EmbeddedFigure15.pdf}
\caption{(a) A topology induced on the shape in Figure 14a. (b) and (c) are two bases that generate this topology, where (c) is in reduced form. (d) Three subshape topologies of the shape topology in (a).}
\label{Figure15}
\end{figure}
\vspace{12pt}
\noindent \emph{EXAMPLE 1. \;\;\;Let $S$ be the shape in Figure 14a. Suppose we choose to recognize in $S$ the three parts shown in Figure 14b. A topology $\mathcal{T}$ can be defined on $S$ in terms of these three parts using the recursive procedure given in Section 3.1. Figure 15a shows the resulting topology $\mathcal{T}$. The sets of parts in Figures 15b and 15c both generate $\mathcal{T}$---thus, both sets are bases for $\mathcal{T}$. Only the set in Figure 15c, however, satisfies the three conditions for a reduced basis for $\mathcal{T}$. Figure 15d shows three different subshape topologies for three different parts of the shape $S$. The reader may want to verify that the structures in Figure 15d are indeed subshape topologies; the computation of the reduced bases that generate each of them is left as an exercise.}
\subsection{Comparing shape topologies}
Two or more topologies induced on the same shape can be compared with respect to the sets of parts that each one recognizes. Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be two topologies induced on the same shape. Say that $\mathcal{T}_2$ is \emph{finer} (or larger) than $\mathcal{T}_1$ if every part in $\mathcal{T}_1$ is also in $\mathcal{T}_2$, or in notation, $\mathcal{T}_2 \supset \mathcal{T}_1$. In this case, $\mathcal{T}_1$ is said to be \emph{coarser} than $\mathcal{T}_2$. Two topologies $\mathcal{T}_1$ and $\mathcal{T}_2$ are said to be \emph{comparable} if either $\mathcal{T}_2 \supset \mathcal{T}_1$ or $\mathcal{T}_2 \subset \mathcal{T}_1$ holds; they are not comparable if neither inclusion holds. When both inclusions hold at the same time, $\mathcal{T}_1 = \mathcal{T}_2$ in which case $\mathcal{T}_1$ and $\mathcal{T}_2$ recognize exactly the same parts. Figure 16 shows three different topologies for the same shape. Topology $\mathcal{T}_2$ (Figure 16b) is finer than $\mathcal{T}_1$ (Figure 16a), but $\mathcal{T}_1$ and $\mathcal{T}_2$ are not comparable to $\mathcal{T}_3$ (Figure 16c).
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{EmbeddedFigure16.pdf}
\caption{Three different shape topologies induced on the shape in Figure 14a. The topologies (a) and (b) are comparable; topology (c) is not comparable to either (a) or (b).}
\label{Figure16}
\end{figure}
Comparisons of topologies are in a sense comparisons of \lq granularities.\rq \;A comparison of two topologies is often straightforward to make: simply count the open parts in the two given topologies and check if the topology with fewer parts is entirely included in the topology with more parts. When the topologies are instead described by their reduced bases, the following criterion can be used to compare them in terms of those bases.
Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be the reduced bases that generate topologies $\mathcal{T}_1$ and $\mathcal{T}_2$, respectively, on the same shape. Then, the following are equivalent:
\begin{enumerate}
\item[(1)] $\mathcal{T}_2$ is finer than $\mathcal{T}_1$.
\item[(2)] For every basis element $b$ in $\mathcal{B}_1$, there is a nonempty subset of basis elements in $\mathcal{B}_2$ such that the sum of the elements in this subset is equal to $b$.
\end{enumerate}
\noindent For example, criterion (2) gives the correct comparability result for topologies $\mathcal{T}_1$ and $\mathcal{T}_2$ in Figure 16, i.e. that $\mathcal{T}_2$ is finer than $\mathcal{T}_1$; for every element in the (reduced) basis of $\mathcal{T}_1$ one can find a subset of elements in the (reduced) basis of $\mathcal{T}_2$ that sums to this element, but the opposite is not possible. Criterion (2) also gives the correct comparability results between $\mathcal{T}_1$ and $\mathcal{T}_3$, and between $\mathcal{T}_2$ and $\mathcal{T}_3$, in the same figure: namely, that $\mathcal{T}_3$ is not comparable to either $\mathcal{T}_1$ or $\mathcal{T}_2$.
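Criterion (2) is directly checkable. Under the finite-set approximation used earlier (parts as sets of hypothetical labels, sum as union), a sketch of the test reads as follows; the two example bases are assumptions for illustration.

```python
def finer_by_bases(B1, B2):
    """Criterion (2): the topology generated by B2 is finer than the one
    generated by B1 iff every element of B1 equals the sum (union) of the
    B2 elements embedded in it."""
    B1 = [frozenset(b) for b in B1]
    B2 = [frozenset(b) for b in B2]
    for b in B1:
        inside = [c for c in B2 if c <= b]
        if not inside or frozenset().union(*inside) != b:
            return False
    return True

# Hypothetical reduced bases over the same primitive labels:
coarse = [{1, 2}, {3, 4}]
fine = [{1}, {2}, {3, 4}]
```

Here `fine` refines `coarse` but not conversely, so the two generated topologies are comparable in one direction only.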
Comparisons of finite topologies on shapes are interesting from an interpretative, aesthetic standpoint. A shape can be interpreted aesthetically in a variety of ways, depending on the parts that different viewers choose to see in it. The different topologies derived from those parts do not have to be comparable with one another---indeed, the parts seen by a viewer in a shape do not have to \lq match\rq \;with the parts seen by another viewer in the same shape (or even by the same viewer at a different time). Thus, a shape can be host to non-comparable, possibly conflicting and contradictory structures.
\section{Closed part, interior and closure}
We have defined what topology is for a shape without points, how to generate topologies with bases, and how to compare different topologies. Now we proceed to concepts that are useful for describing parts of a shape based on their topological characteristics.
Let $S$ be a shape with topology $\mathcal{T}$. A part $x$ of $S$ is said to be \emph{closed} whenever its complement relative to $S$, namely $S$ - $x$, is open. For example, in Figure 17, part (a) is closed but not open in the topology in Figure 15a; likewise, parts (b), (c) and (d) are closed but not open in the first, second and third topology in Figure 15d, respectively.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure17.pdf}
\caption{Parts that are closed but not open in, respectively, (a) the topology in Figure 15a, (b) the first, (c) second and (d) third topologies in Figure 15d.}
\label{Figure17}
\end{figure}
Instead of using open parts, one could just as well define a topology for a shape by giving a collection of closed parts satisfying the same three conditions given in \emph{Section 3.1}. One could then define open parts as the complements of closed parts and proceed in the obvious way. This procedure has no particular technical or conceptual advantages over the one already adopted.
A part $x$ of $S$ is said to be \emph{closed-open} if $x$ is both open and closed in $\mathcal{T}$. In other words, a closed-open part is an open part whose complement is also open. For example, in any topology the \emph{empty shape} and the shape itself are always closed-open parts. The topology in Figure 15a, the second and third topologies in Figure 15d have additional closed-open parts---they are shown in Figure 18a, 18b and 18c, respectively. The first topology in Figure 15d has no additional closed-open parts.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure18.pdf}
\caption{Closed-open parts in, respectively, (a) the topology in Figure 15a, (b) the second and (c) third topologies in Figure 15d.}
\label{Figure18}
\end{figure}
The \emph{interior} of a part $x$ of $S$ is the sum of all the open parts in $\mathcal{T}$ that are embedded in $x$. One easily notices that the interior of $x$ is the \emph{largest member} in $\mathcal{T}$ that is embedded in $x$. It follows that if $x$ is open, the interior of $x$ is equal to $x$ itself. For example, in any topology, $S$ is always equal to its interior. If $x$ is not open, the interior of $x$ is a proper, possibly empty, part of $x$.
The following is a particularly useful result concerning finite topology on shapes. For any part $x$ of $S$, there exists a unique \emph{smallest member} in $\mathcal{T}$ that has $x$ as a part.
\begin{proof}
Let $C$ be the product of all open parts of $S$ in $\mathcal{T}$ which have $x$ as a part. Since $\mathcal{T}$ is finite, this is a finite product and so $C$ is open. It is immediate that $C$ must be the smallest member in $\mathcal{T}$ that has $x$ as a part and that $C$ is unique.\end{proof}
The existence of the unique smallest member supports the following definition, which is dual to the concept of interior. The \emph{closure} of a part $x$ of $S$ is the smallest member in $\mathcal{T}$ that has $x$ as a part. For any part, its closure is determined each time by the available open parts in a topology. The following is obtained as a consequence of this: a part $x$ of $S$ is open if and only if $x$ is equal to its closure in $\mathcal{T}$. (An interpretation of this definition of closure from a design point of view can be found in \cite[p.~285]{Stiny2006}.)
To introduce some notation, let $\mathrm{Int}\,x$ denote the interior of a part $x$ and $\overline{x}$ its closure. The following chained relationship describes how the interior of the part, the part itself, and the closure of the part are related:
\[
\mathrm{Int}\,x \leq x \leq \overline{x}
\]
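Both quantities are finite computations over the open parts of a topology. Under the finite-set approximation used earlier (parts as sets of hypothetical labels; sum as union, product as intersection), a sketch reads as follows; the example topology and part are assumptions for illustration.

```python
def interior(T, x):
    """Largest open part embedded in x: the sum (union) of all open
    parts of T that are embedded in x."""
    x = frozenset(x)
    return frozenset().union(*(C for C in T if C <= x))

def closure(T, x):
    """Smallest open part having x as a part: the product (intersection)
    of all open parts containing x (well-defined since T is finite and
    S itself is open)."""
    x = frozenset(x)
    return frozenset.intersection(*(C for C in T if x <= C))

# Hypothetical 9-member topology on S with primitives labelled 1..4:
T = {frozenset(s) for s in
     [(), (2,), (3,), (1, 2), (2, 3), (3, 4), (1, 2, 3), (2, 3, 4), (1, 2, 3, 4)]}
x = frozenset({1, 3})
i, c = interior(T, x), closure(T, x)   # Int x <= x <= closure of x
```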
The following figure is a pictorial illustration of this relationship for an arbitrary part of the shape in Figure 14a; its interior and closure are determined based on the topology in Figure 15a.
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption1.pdf}
\label{interior-closure}
\end{figure}
One may expect to name a part that is equal to its closure a \lq closed part\rq \;(for example, this was how closed parts were defined in \cite{Stiny94}). However, this is not in sync with how finite topologies are defined in this paper. The name closed part is reserved for a part whose \emph{complement} is open (but not necessarily itself open). Additionally, note that \emph{any} visible part of a shape, arbitrarily chosen, has a closure in a topology, independently of whether this part, or its complement, is open or not.
It is clear from the given definitions of interior and closure that if a part $x$ of $S$ is open, then $x$ is equal to both its interior and its closure. This indicates that there is no real distinction between the interior of a part, the part itself, and its \lq limits,\rq\;as there is, for example, for open sets and their limit points in a topological space. For shapes, the closest concept to the notion of \lq limit\rq\;is perhaps the boundary. Indeed, shapes do have boundaries. However, as explained in \emph{Section 2.1}, the boundary of a shape is not made from the same basic elements as the shape itself, and thus it is not part of the shape. The closure of a part that is open ends up simply being the part itself; if a part is not open, its closure is the smallest open part in the topology that this part is embedded in.
Denseness can be defined in almost the same way as the homonymous concept in general topology. Say that a part $x$ of $S$ is \emph{dense} in $\mathcal{T}$ if the product of $x$ with every nonempty open part of $S$ is nonempty. An equivalent way of defining denseness is to say that part $x$ is dense if and only if the only closed part of $S$ that has $x$ as a part is $S$ itself. For example, in Figure 15d the second and third subshape topologies are defined for dense parts of the shape in Figure 14a; the first subshape topology, in the same figure, is not. Evidently, for any shape and for any finite topology for it, one can always find uncountably many dense parts with respect to that topology.
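The first of the two equivalent definitions of denseness translates directly into the finite-set approximation used earlier (product as intersection); the example topology and parts below are assumptions for illustration.

```python
def is_dense(T, x):
    """x is dense in T iff its product (intersection) with every
    nonempty open part of T is nonempty."""
    x = frozenset(x)
    return all(x & C for C in T if C)

# Hypothetical 9-member topology on S with primitives labelled 1..4:
T = {frozenset(s) for s in
     [(), (2,), (3,), (1, 2), (2, 3), (3, 4), (1, 2, 3), (2, 3, 4), (1, 2, 3, 4)]}
```

In this example a part is dense exactly when it meets both of the minimal nonempty open parts, here labelled 2 and 3.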
\section{The \lq space\rq \;of a shape relative to a topology}
The set of points onto which one induces a topological structure is known in general topology as a \emph{space}. A space can be infinite, such as the real number line $\mathbf{R}$ or the real plane $\mathbf{R}^2$, or finite, such as the set of labelled vertices of a finite graph. No matter the choice of the space, it is assumed that the points are given all at once, ahead of time. Moreover, the points of the space are absolute, in the sense that they are invariant under different topologies. For example, regardless of whether the real plane $\mathbf{R}^2$ is equipped with a metric topology or with the standard topology generated by open rectangles, the points of $\mathbf{R}^2$ are the same. Is this also true for shapes and shape topologies?
The first publication that considered the notion of \lq space\rq\;in conjunction with finite topology on shapes in algebras $U_i$ is \cite{Krst96}. As is rightly pointed out there, for any given shape, an underlying space can be determined only \emph{relative to} a specific topology. In other words, a topology induced on a shape gives rise to its own underlying space, and thus different topologies for the same shape will give rise to different spaces. To actually define the \lq space\rq \;of a shape relative to a topology, \cite[p. 128]{Krst96} suggests mathematical machinery originating in the theory of locales (point-free topology, e.g. \cite{PicPultr2010}). Here, an equivalent, albeit much simpler, approach is suggested, using only material developed in \emph{Section 3} and without appeal to extra machinery: If $S$ is a shape and $\mathcal{T}$ a topology for $S$, the \emph{space} underlying $S$ relative to $\mathcal{T}$ is precisely the reduced basis $\mathcal{B}$ that generates $\mathcal{T}$, and the basis elements of $\mathcal{B}$ may be called the \emph{points} of this space (i.e. the \lq points\rq \;of $S$ relative to $\mathcal{T}$; the points obtained from this approach correspond to the f-prime elements of a topology in the formalization of topologies for shapes within locale theory that \cite{Krst96} follows).
We can cast any topology $\mathcal{T}$ for a shape into its space theoretical version, say $\mathcal{T}^*$, using the reduced basis for $\mathcal{T}$. This is easily shown in an example.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{EmbeddedFigure19.pdf}
\caption{A topology induced on the space for the shape in Figure 14a. The space and the topology are constructed relative to the topology in Figure 15a. Each member of this topology is a set and corresponds to an open part from the original topology.}
\label{Figure19}
\end{figure}
Suppose $S$ is the shape in Figure 14a and $\mathcal{T}$ the topology for $S$ in Figure 15a. The space of $S$ relative to $\mathcal{T}$ is the reduced basis $\mathcal{B}$ in Figure 15b. Let $C$ be any nonempty open part in $\mathcal{T}$. Define the set of elements of $\mathcal{B}$ embedded in $C$,
\[
B_C = \{\; b \in \mathcal{B} \;|\; b \leq C\;\}.
\]
Then, the collection $\{B_C\}_{C \in \mathcal{T}}$, with the empty set ($\emptyset$) included, forms a topology $\mathcal{T}^*$ on the space of $S$. This topology is shown graphically in Figure 19. Notice that each member $B_C$ of $\mathcal{T}^*$ is no longer a shape but a set; the elements in each set sum to the open part $C$ coming from $\mathcal{T}$. Thus, if we compute the sum of the elements in all sets (and map the empty set to the \emph{empty shape}), the resulting set of shapes will be equal to the original topology $\mathcal{T}$. By construction, $\mathcal{T}$ and $\mathcal{T}^*$ are isomorphic structures.
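The construction of $\mathcal{T}^*$ can be sketched computationally. In the minimal model below (an assumption, not the paper's algebra $U_i$), open parts are frozensets of reduced-basis elements, so sum is union and embedding is the subset relation; all function and variable names are illustrative.

```python
# Minimal sketch, assuming parts can be modeled as frozensets of
# reduced-basis elements (sum = union, embedding = subset relation).
def space_version(topology, basis):
    """Cast a topology T into its space-theoretical version T*: each
    nonempty open part C becomes the set B_C of basis elements
    embedded in C; the empty set stands in for the empty shape."""
    t_star = {frozenset()}  # the empty set is included by definition
    for C in topology:
        if C:  # every nonempty open part contributes a B_C
            t_star.add(frozenset(b for b in basis if b <= C))
    return t_star

# A toy reduced basis of four basis elements (the "points" of the space).
b1, b2, b3, b4 = (frozenset({i}) for i in range(4))
basis = [b1, b2, b3, b4]

# A topology generated by that basis: 0, two open parts, and S itself.
S = b1 | b2 | b3 | b4
topology = {frozenset(), b1 | b2, b3 | b4, S}

t_star = space_version(topology, basis)

# Summing the elements of every B_C (union, in this model) recovers the
# original topology, so T and T* are isomorphic structures.
recovered = {frozenset().union(*B) for B in t_star}
assert recovered == topology
```

The model makes the isomorphism between $\mathcal{T}$ and $\mathcal{T}^*$ explicit: each member of $\mathcal{T}^*$ is a set, and the sum of its elements returns the open part it came from.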
For any shape, there is no single absolute space underlying all finite topologies that can be induced on it. Rather, every different topology induced on a shape specifies its own underlying space via its reduced basis. In contrast, for shapes made with points, the points of the shape stay absolute and independent of the topologies induced on it, just like in topology for finite sets \cite{Haridis2019}.
\section{Continuity}
\subsection{Mappings between shapes}
Continuity is an important concept and one of the foundational ones in topology. To study continuity, one needs some appropriate notion of mapping (or function) between the objects considered.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure20.pdf}
\caption{Pictorial illustration of mappings between shapes.}
\label{Figure20}
\end{figure}
A mapping between two shapes can be thought of as a certain \lq rule of assignment\rq\;that assigns/maps \emph{every part} of one shape to a part of the other shape. More intuitively, a mapping is a description of a \emph{spatial transformation} of the parts of one shape into parts of another shape. To emphasize this spatial aspect of mappings we represent them with graphical diagrams, such as those in Figure 20.
A mapping may describe how a shape (the whole shape), or part(s) of it, is transformed by rotation, reflection, translation, or a linear transformation (or other known transformations) into another shape; for example, the mappings in Figure 20a. A mapping may also describe how a shape is transformed to another by adding something to it (Figure 20b, top), or by subtracting (erasing) something from it (Figure 20b, bottom). A mapping may also take a shape into itself untouched, in which case we have a special kind of mapping known as the \emph{identity} mapping (Figure 21b). Mappings between shapes are used all the time in practice in art and design. They are one of the most popular ways of creating something new out of something already known.
A \emph{mapping} from a source shape $S$ into a target shape $S'$ is written as
\vspace{12pt}
\centerline{ $f: S \rightarrow S'$.}
\vspace{12pt}
\noindent If $x$ is a part of $S$, denote by $f(x)$ the (unique) image of $x$ determined under $f$; the image of $x$ is a part $y$ of $S'$, such that $f(x) = y$. The mapping $f$ describes how the parts of $S$ are spatially changed or transformed into the full target shape $S'$, so that the image of $S$ under $f$ is equal to $S'$, i.e. $f(S) = S'$.
New mappings between shapes can be formed from given ones. For example, a new mapping can be formed by restricting the target image of $f$ to a \emph{proper part} of $S'$. In particular, the mapping $h: S \rightarrow S^{+}$, where $S^{+}$ is a proper part of $S'$ and $h(S) = S^{+}$, describes how $S$ is transformed to some part $S^{+}$ of $S'$, but not to the full shape $S'$. The \emph{composite} of two given mappings can also be formed in the obvious way: given $f: S \rightarrow S'$ and $g: S' \rightarrow S''$, the composite of the two is the mapping $(g \circ f): S \rightarrow S''$ defined by the equation $(g \circ f)(x) = g(f(x))$, for all parts $x$ of $S$.
If $y$ is a part of $S'$, denote by $f^{-1}(y)$ the \emph{inverse image} or \emph{preimage} of $y$. Note that the notation $f^{-1}$ here represents an \emph{operation}, namely, the operation of \lq preimage\rq. It should not be confused with the \lq inverse mapping\rq\;of $f$ (i.e. the mapping $f^{-1}: S' \rightarrow S$).
In set theory, the image of a set (or of a subset) is a set, and the preimage of a set (or of a subset) is also a set. A desired scenario for shapes would be: the image of a shape (or of a part) is a shape (as we already said), and the preimage of a shape (or of a part) is also a shape. It turns out that preimages of shapes don't behave as expected. Extra care is needed to obtain a definition suitable for our purposes. The definition we seek can be grounded nicely in a simple pictorial example.
Suppose that the following transformation between two shapes $S$ and $S'$
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption2.pdf}
\label{map-Example}
\end{figure}
\noindent is described by a mapping $f: S \rightarrow S'$, defined by $f(x) = x + A$, for any part $x$ of $S$, so that $f(S) = S'$. The symbol $A$ represents the shape
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption3.pdf}
\label{shape-A}
\end{figure}
\noindent that is added to $S$ to produce $S'$. Then, suppose we ask for the preimage of this part $y$ of $S'$,
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption4.pdf}
\label{mapPart-Y}
\end{figure}
\noindent Following an approach analogous to the classical definition of preimages in set theory (e.g. \cite{Munkr00}), we need to define a set of parts of shape $S$ all of which have the following property: the image of each part in this set under $f$ is embedded in the part $y$ (notice here how \lq inclusion of points in subsets\rq\;is changed to embedding of parts). This approach will give us a set that consists of uncountably many parts of $S$. A section of this set is shown graphically below
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption5.pdf}
\label{infinite-set}
\end{figure}
\noindent The problem that appears immediately with this approach is that the resulting preimage is a set (of parts), not a shape.
The resolution is simple enough and it reflects our visual expectations. We will instead define the preimage $f^{-1}(y)$ of a given part $y$ of $S'$ as:
\vspace{12pt}
\centerline{ \emph{The largest part of $S$ whose image under $f$ is embedded in $y$}.}
\vspace{12pt}
\noindent In the above example, this largest part is obviously the third part in the set given graphically. Moreover, it is not hard to see that the largest part in this set is equal to the \emph{sum} of all the parts in the set. Although it may be a sum over uncountably many parts, this sum will always result in a shape (if the preimage exists) because the largest part is always a member of this set and covers every other part in it.
To convert the definition of the preimage in terms of the \lq largest part\rq\;into notation, first define the following (partially ordered) set of parts
\vspace{12pt}
\centerline{$U_y = \{\; x \;|\; x \leq S \; \textrm{and} \; f(x) \leq y\;\}$}
\vspace{12pt}
\noindent for a given part $y$ of $S'$. Then the preimage of part $y$ is the \lq maximum\rq\;element of $U_y$,
\vspace{12pt}
\centerline{$f^{-1}(y) = \sup U_y$.}
\vspace{12pt}
\noindent This maximum element is the part in $U_y$ in which every other part of $U_y$ is embedded; equivalently, it is the least upper bound of $U_y$, which itself belongs to $U_y$.
Such an element will always exist if the given part $y$ has a preimage. Moreover, if there is at least one nonempty part in the preimage set $U_y$, then there must be infinitely many other parts, including the \emph{empty part} (consider that one can always find infinitely many nonempty parts within a given one). The preimage set $U_y$ may also have one part only, namely the empty one. We thus have that the preimage of a given part $y$ of $S'$ is defined (and it is a shape), if and only if, the set $U_y$ is nonempty; it is not defined, when the set $U_y$ is empty---this, intuitively speaking, mimics an all-or-nothing situation. Consequently, when the set $U_y$ is empty, we will say that the preimage $f^{-1}(y)$ of part $y$ is \emph{undefined}. This is another point where shapes diverge from sets with respect to the preimage operation. For sets, if the preimage set is empty, we say that the preimage is the \emph{empty set} ($\emptyset$). It is not possible to copy this statement for shapes, however, because the \emph{empty set} is a set, not a shape, and it is \emph{not} the same kind of object as the \emph{empty shape} (an empty set is not an empty drawing).
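The definition $f^{-1}(y) = \sup U_y$, and the all-or-nothing behavior of $U_y$, can be traced in a small computable model. The sketch below assumes parts are finite sets of atomic segments (in the paper the parts of a shape are uncountably many), takes $f(x) = x + A$ as in the running example, and returns \texttt{None} when the preimage is undefined; all names are illustrative.

```python
from itertools import chain, combinations

# Minimal sketch, assuming a shape is a frozenset of atomic segments
# (sum = union, embedding = subset). This finite model is illustrative;
# in the paper the parts of a shape are uncountably many.
A = frozenset({'a'})
S = frozenset({'s1', 's2'})

def f(x):
    """The running example: f(x) = x + A, so f(S) = S' = S + A."""
    return x | A

def parts(shape):
    """All parts of a finite shape in this model (all its subsets)."""
    xs = list(shape)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def preimage(y):
    """f^{-1}(y) = sup U_y with U_y = {x <= S : f(x) <= y};
    returns None when U_y is empty (preimage undefined)."""
    U_y = [x for x in parts(S) if f(x) <= y]
    if not U_y:
        return None
    sup = frozenset(chain.from_iterable(U_y))  # sum of all parts in U_y
    return sup if f(sup) <= y else None        # sup must itself lie in U_y

S_prime = f(S)
assert preimage(S_prime) == S               # the largest such part is S
assert preimage(A) == frozenset()           # U_y holds only the empty part
assert preimage(frozenset({'s1'})) is None  # U_y empty: undefined
```

The three assertions mirror the three cases in the text: a nonempty preimage, a preimage equal to the empty part, and an undefined preimage.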
Let us state (without proof) that the image operation is \emph{order-preserving} (or \emph{monotone}):
\vspace{12pt}
\centerline{$x \leq x' \implies f(x) \leq f(x')$,}
\vspace{12pt}
\noindent that is, it preserves the part relation (when defined) between any two parts $x$, $x'$ of $S$. The same is true for the preimage operation, which we can write as a mapping $g: S' \rightarrow S$ defined as $g(y) = f^{-1}(y)$ for all parts $y$ of $S'$, i.e. $y \leq y' \implies g(y) \leq g(y')$. Of course, the latter implication holds when the preimage exists for \emph{all} parts $y$ of $S'$.
Not all mappings $f$, however, yield a corresponding mapping $g$ which represents the preimage operation. There may be parts of $S'$ with undefined preimages. As an illustration, the preimages of the following three parts are undefined
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption6.pdf}
\label{parts-no-preimage}
\end{figure}
\noindent in the example given previously with the mapping $f(x) = x + A$, and in fact there are uncountably many other parts of $S'$ that don't have preimages.
Now, if the preimage operation is indeed defined for all the parts of $S'$, the following connection between images and preimages can be established for any part $x$ of $S$ and any part $y$ of $S'$ (with a slight abuse of notation, I insert $f^{-1}$ in place of $g$):
\vspace{12pt}
\centerline{$f(x) \leq y \iff x \leq f^{-1}(y)$.}
\vspace{12pt}
\begin{proof}
The desired result is immediate from how preimages are defined, and from the fact that the image and preimage operations are order-preserving. If the image of $x$ is embedded in $y$, then $x$ must be part of $f^{-1}(y)$, which is the largest part of $S$ whose image is embedded in $y$; that is, $x \leq f^{-1}(y)$. Conversely, if $x$ is part of $f^{-1}(y)$, then by monotonicity $f(x) \leq f(f^{-1}(y)) \leq y$, the last embedding holding because $f^{-1}(y)$ is itself a member of $U_y$.
\end{proof}
If the above connection between images and preimages holds for a particular mapping, then (and only then) we obtain the following two convenient embedding relations:
\vspace{12pt}
\centerline{$f(f^{-1}(y)) \leq y$ and $x \leq f^{-1}(f(x))$}
\vspace{12pt}
\noindent for any part $x$ of $S$ and any part $y$ of $S'$ (a proof is omitted here).
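The omitted proof is short; each relation follows by instantiating the connection $f(x) \leq y \iff x \leq f^{-1}(y)$ with a trivially true embedding:

```latex
% Sketch of the omitted argument.
Setting $x = f^{-1}(y)$: the embedding $f^{-1}(y) \leq f^{-1}(y)$ holds
trivially, so the right-to-left direction of the connection gives
$f(f^{-1}(y)) \leq y$.
Setting $y = f(x)$: the embedding $f(x) \leq f(x)$ holds trivially, so
the left-to-right direction gives $x \leq f^{-1}(f(x))$.
```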
It is interesting to examine the conditions under which a mapping has preimages for all parts $y$ of $S'$. One approach is to classify the different types of mappings that may exist between two shapes in terms of \emph{schemas} \cite{Stiny2006}, which are generalized classes of mappings between shapes; this is, however, beyond the scope of this paper.
\subsection{Continuous mappings}
A mapping is defined independently of the topologies induced on the two shapes it connects. A mapping only describes how parts of a shape change into parts of another shape. The continuity of the mapping, on the other hand, is a topological issue and it depends on the specific topologies induced on the two shapes.
In this section, I formulate a definition of continuity for shapes that broadly captures the concept of \lq structure preservation\rq. A mapping between two shapes is expected to be continuous in a structural (topological) sense, when it preserves relations between the open parts of the two shapes it connects. Intuitively, we want the open parts of one shape to somehow \lq continue\rq\;to be related in the same way when a continuous mapping is applied.
The relations between the open parts in a given topology are in principle algebraic, and are captured in its lattice of open parts. This highlights the algebraic nature of the topologies involved, and provides for a useful definition of continuity.
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedFigure21.pdf}
\caption{Continuity with an identity mapping. (a) A shape $S$, (b) a mapping $f$ from $S$ to itself. $S'$ is the shape $S$ induced with the topology in (d), and $S$ is the same shape induced with the topology in (c).}
\label{Figure21}
\end{figure}
Let $S$ and $S'$ be two shapes and, respectively, $OS$ and $OS'$ be their lattices of open parts. A mapping $f: S \rightarrow S'$ is \emph{continuous} if the mapping $f^*: OS' \rightarrow OS$, defined as $f^*(D) = f^{-1}(D)$, for every open part $D$ in $OS'$, is a \emph{lattice homomorphism} that preserves the top element.
Applications of this definition are in \emph{Examples 2a and 2b}. It is implied in the definition above that, when $f$ is continuous, the preimage of every open part in $OS'$ is an open part in $OS$ (a result analogous to topological spaces). As a lattice homomorphism, $f^*$ must preserve sums and products of any two open parts of $OS'$:
\vspace{12pt}
\centerline{$f^*(C \cdot D) = f^*(C) \cdot f^*(D)$ and $f^*(C + D) = f^*(C) + f^*(D)$.}
\vspace{12pt}
\noindent From those two equations, we can derive that the mapping $f^*$ is order-preserving:
\vspace{12pt}
\centerline{If $C \leq D$ in $OS'$, then $f^*(C) \leq f^*(D)$ in $OS$.}
\vspace{12pt}
\noindent The proof of this statement is analogous to that for lattices in general (notice that $C \leq D$ implies $C \cdot D = C$ and $C + D = D$).
That $f^*$ preserves the top element means that it maps $S'$ back to $S$, i.e. $f^*(S') = S$. However, $f^*$ need not necessarily map the bottom elements. While $f^*(0)$ will act as a bottom element for every element in the image of $f^*$ (this follows from the fact that $f^*$ preserves products), it need not always be the \emph{0}; we do not need to have $f^*(\emph{0}) = \emph{0}$ (see \emph{Example 2}). Spatially speaking, the preimage of the \emph{empty part} may in fact be a nonempty part (What happens when a mapping erases a part?). One example of a mapping that, if continuous, will always preserve both the top and bottom elements is the identity mapping (e.g. \emph{Example 2a}).
In general, $f^*$ is a \emph{many-to-one} mapping; two or more open parts in $OS'$ can have the same preimage in $OS$. Depending on the mapping $f$, however, $f^*$ can be one-to-one, too, in which case we have \emph{structure embedding} (a one-to-one lattice homomorphism).
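For finite lattices of open parts, the continuity criterion can be checked mechanically. A minimal sketch, assuming open parts are modeled as frozensets (product = intersection, sum = union) and $f^*$ is given as a table; the names \texttt{is\_continuous\_star} and \texttt{f\_star} are illustrative.

```python
# Minimal sketch, assuming open parts are frozensets: product is
# intersection, sum is union. Names are illustrative.
def is_continuous_star(OS_prime, f_star, top, top_prime):
    """Check that f*: OS' -> OS, given as a dict, is a lattice
    homomorphism preserving the top element (the continuity criterion)."""
    if f_star[top_prime] != top:
        return False
    for C in OS_prime:
        for D in OS_prime:
            # f* must preserve products and sums of open parts
            if f_star[C & D] != f_star[C] & f_star[D]:
                return False
            if f_star[C | D] != f_star[C] | f_star[D]:
                return False
    return True

# Toy lattices mirroring Example 2b: f subtracts B, so the preimage of
# the bottom element 0 is B, not 0 -- yet f* is still a homomorphism.
O = frozenset()
B = frozenset({'b'})
S = frozenset({'b', 's'})       # top of OS
S_plus = frozenset({'s'})       # top of OS'
OS_prime = [O, S_plus]          # lattice of open parts of S+
f_star = {O: B, S_plus: S}      # the preimage operation f*

assert is_continuous_star(OS_prime, f_star, S, S_plus)
assert f_star[O] != O           # the bottom need not be preserved
```

The final assertion illustrates the point made above: $f^*(\emph{0})$ can differ from \emph{0} without spoiling continuity.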
The following (immediate) observation is analogous to \lq closure under image\rq\;in topological spaces for continuous functions. Let $f: S \rightarrow S'$ be a continuous mapping between shapes $S$ and $S'$. Then
\vspace{12pt}
\centerline{For every part $x$ of $S$, one has $f(\overline{x}) \leq \overline{f(x)}$.}
\vspace{12pt}
\begin{proof}
Let $D$ = $\overline{f(x)}$ be the closure of $f(x)$ in the topology for $S'$. Since $f$ is continuous by assumption, $f^*(D) = f^{-1}(D)$ is open in the topology for $S$. Then $x$ is a part of $f^{-1}(D)$, because $f^{-1}(D)$ is the largest part of $S$ whose image is embedded in $D$. We thus have $x \leq \overline{x} \leq f^{-1}(D)$ or $f(\overline{x}) \leq f(f^{-1}(D)) \leq D$, so that $f(\overline{x}) \leq \overline{f(x)}$.
\end{proof}
\vspace{12pt}
\noindent \emph{EXAMPLE 2.\;\;\;(a) Let $\mathcal{T}$ and $\mathcal{T}'$ be two comparable topologies for the shape in Figure 21a. Suppose $OS$ and $OS'$ are their lattices of open parts, shown in Figure 21c and 21d, respectively. For ease of reference, call $S$ the shape in Figure 21a induced with topology $\mathcal{T}$ and $S'$ the same shape induced with the topology $\mathcal{T}'$. Define the mapping $f: S' \rightarrow S$ to be the identity map $f(y) = y$, for every part $y$ of $S'$, which is shown diagrammatically in Figure 21b. Mapping $f$ is not continuous: the preimage $f^{-1}(C)$ fails to be open in $\mathcal{T}'$ for some parts $C$ open in $\mathcal{T}$, and so a homomorphism from $OS$ to $OS'$ is not defined. On the other hand, the inverse mapping of $f$, namely, $f^{-1}: S \rightarrow S'$ is continuous, because $(f^{-1})^{-1}(D) = f(D)$ is open in $\mathcal{T}$, for all parts $D$ open in $\mathcal{T}'$, i.e. a homomorphism is formed from $OS'$ to $OS$. Because the two topologies are defined on the same shape, this implies that every open part in $\mathcal{T}'$ is also an open part in $\mathcal{T}$. Hence, $\mathcal{T}$ is finer than $\mathcal{T}'$.}
\emph{(b) Let $f: S \rightarrow S^{+}$ be the mapping shown in Figure 22a, defined as}
\vspace{12pt}
\centerline{\emph{$f(x) = x - B,$}}
\vspace{12pt}
\noindent \emph{for every part $x$ of $S$, and the symbol $B$ represents the following shape}
\begin{figure}[!ht]
\centering
\includegraphics{EmbeddedNoCaption7.pdf}
\label{PartB-sub}
\end{figure}
\noindent \emph{that is subtracted from $S$ to get $S^{+}$. Suppose the shapes $S$ and $S^{+}$ are induced with the topologies in Figure 22b and 22c, respectively. Then, the mapping $f$ is continuous. Moreover, notice that $f^*(S^{+}) = S$ but $f^*(\emph{0}) = B \neq \emph{0}$.}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.92]{EmbeddedFigure22.pdf}
\caption{(a) An example of a continuous mapping $f$, from a shape $S$ to a proper part $S^+$ of a shape $S'$. The shapes $S$ and $S^+$ possess, respectively, the topologies shown in (b) and (c).}
\label{Figure22}
\end{figure}
\section{Topological connectedness}
\subsection{Visual versus structural connectedness}
The last topic of this paper is about topological connectedness. In general topology, connectedness is the study of how topological spaces \lq split\rq\;into separate pieces, and is perhaps one of the very few topics that have a more or less point-free formulation (it is still set-theoretical, but most of the time it can work without mentioning points; see \cite{Munkr00} for the required background). It may seem that shapes can have an exact counterpart, and indeed the basics are as in topological spaces. As we focus, however, on how appearance interacts with connectedness, the results are not as one would expect them to be (at least if one has only point-sets in mind, and not drawings, models and other spatial material).
First, let's define topological connectedness by mimicking the classical version. Let $S$ be a shape and $\mathcal{T}$ a topology for $S$. A \emph{separation} of $S$ is a pair $C$, $D$ of nonempty disjoint open parts in $\mathcal{T}$ such that $C + D = S$. Say that $S$ is \emph{topologically connected}, or just \emph{connected}, if there exists no separation of $S$ in $\mathcal{T}$.
Connectedness, as formulated above, can apply not only to shapes but also to parts. For the subshape topology (\emph{Section 3.3}) associated with a nonempty part of a shape, we have an additional formulation (given here without proof).
Let $S$ be a shape with topology $\mathcal{T}$. If $x$ is a nonempty part of $S$ with subshape topology $\mathcal{T}_x$, a separation of $x$ is a pair of disjoint nonempty parts $A$ and $B$ such that $A + B = x$. The part $x$ is connected if there exists no separation of $x$ in $\mathcal{T}_x$.
Note that the parts $A$, $B$ that form the separation are open parts in $\mathcal{T}_x$, but don't have to be open in the original topology $\mathcal{T}$ on $S$. The analogous formulation of connectedness for subspaces in general topology requires the subsets forming the separation to not only be disjoint but to also not share limit points, i.e. each must be disjoint from the closure of the other \cite{Munkr00}. This is automatically granted for disjoint open parts of shapes, because open parts are equal to their closures (\emph{Section 4}).
We can examine some of the topologies given so far. The shape in Figure 14a with the topology in Figure 15a is disconnected. The same shape is connected with the topologies in Figure 16a and 16c, but disconnected with the topology in Figure 16b. The shape in Figure 21a, with either one of the topologies in Figure 21c and 21d, is connected. The shape in the first subshape topology in Figure 15d is connected. The second and third shapes, in the same figure, are disconnected.
Connectedness is a topological (structural) property, since it is determined exclusively from the open parts of a shape relative to a particular topology. Quite like in topological spaces, we have that a shape $S$ is connected, if and only if, it has exactly two complemented open parts, namely, $S$ and \emph{0} (if there is another open part $C$ whose relative complement is open, then $S$ has a separation, namely, $C$ and $S - C$).
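Because a finite topology has finitely many open parts, a separation can be searched for exhaustively. A minimal sketch, assuming open parts are modeled as frozensets of basis elements (disjoint = empty intersection, sum = union); the function names are illustrative.

```python
from itertools import combinations

# Minimal sketch, assuming open parts are frozensets of basis elements
# (disjoint = empty intersection, sum = union). Names are illustrative.
def has_separation(topology, S):
    """A separation of S is a pair C, D of nonempty disjoint open
    parts of the topology with C + D = S."""
    opens = [C for C in topology if C]  # the nonempty open parts
    return any(not (C & D) and (C | D) == S
               for C, D in combinations(opens, 2))

def is_connected(topology, S):
    return not has_separation(topology, S)

S = frozenset({1, 2, 3, 4})
# No two nonempty disjoint open parts sum to S here: connected.
T_connected = {frozenset(), frozenset({1, 2}), S}
# Adding the open part {3, 4} creates the separation {1, 2} + {3, 4} = S.
T_disconnected = T_connected | {frozenset({3, 4})}

assert is_connected(T_connected, S)
assert not is_connected(T_disconnected, S)
```

The toy example shows that connectedness is decided by the topology alone: the same carrier $S$ is connected under one topology and disconnected under a finer one.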
Important differences between topological spaces and shapes are revealed when we focus on how appearance interacts with connectedness.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.83]{EmbeddedFigure23.pdf}
\caption{Three subspaces of the real line.}
\label{Figure23}
\end{figure}
Consider the three subspaces of the real line $\mathbf{R}$ in Figure 23: $X = [a, b]$, $Y = [a, c] \cup [d, b]$ and $Z = [a, e) \cup (e, b]$. All three subspaces \emph{inherit} an infinite topology from the underlying \lq standard\rq\;topology for $\mathbf{R}$. $X$ is an interval and, like all intervals of $\mathbf{R}$, it is connected \cite{Munkr00}. $Y$ is the set $X$ with the subset ($c$, $d$) removed. It is disconnected, because it is the union of two disjoint intervals which are open in the subspace topology for $Y$. $Z$ is the set $X$ with the single point $e$ removed. It is also disconnected, because it is the union of two intervals which are open in the subspace topology for $Z$ and neither contains a limit point of the other. Notice how we automatically went from a connected subspace ($X$) to disconnected subspaces ($Y$ and $Z$) by removing pieces.
With respect to the three subspaces, \emph{structure matches with appearance}: $X$ \lq looks connected\rq\;(and is structurally connected) because it comes as a single piece; $Y$ \lq looks disconnected\rq\;(and is structurally disconnected) because it is made of two separate pieces, and the same observation applies to $Z$, too (even though we can't really \lq look\rq\;at what remains after a single point is removed). Similar scenarios can be devised for other types of shapes, too, for example shapes made of arrangements of planes where each plane is considered as a subspace of $\mathbf{R}^2$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.83]{EmbeddedFigure24.pdf}
\caption{(a) A shape $S$ made of a single line and a shape $S'$ obtained by removing a part from $S$. (b) Three topologies for $S$. $S$ is topologically connected only with the first topology. (c) Three subshape topologies for $S'$ (treating $S'$ as a part of $S$). $S'$ is topologically disconnected with the second and third subshape topologies. }
\label{Figure24}
\end{figure}
Unlike (sub)spaces, shapes in algebras $U_i$ do not \lq lie\rq\;in an ambient space that has a predefined analysis into specific open parts (e.g. subspaces of $\mathbf{R}$ come with a predefined granularity). Shapes in $U_i$ are unanalyzed. Topologies are not inherited but \emph{assigned} through interpretations of their appearance. We can thus have shapes that \lq look connected\rq\;or that \lq look disconnected\rq\;induced with either connected or disconnected topological structures, in any number of different ways.
For example, suppose that subspace $X$ is represented as a line in algebra $U_1$, denoted with $S$, as in Figure 24a. As a single line, $S$ \lq looks connected\rq. Topologically, however, it can be either connected or disconnected, depending on the open parts recognized in it. Three different finite topologies for $S$ are in Figure 24b; $S$ is connected with the first, but disconnected with the second and third.
Suppose next that we remove a part from $S$ to obtain the shape $S'$ in Figure 24a (this is subspace $Y$ represented as two lines in algebra $U_1$). That $S'$ now \lq looks disconnected\rq\;---it comes in two separate pieces---is not an indication (or a consequence) of a disconnected topological structure. $S'$ is a new shape. It can inherit topologies from $S$ through subshape topology constructions, such as the ones in Figure 24c; in this case, $S'$ is still connected with the first subshape topology, and disconnected with the second and third. Alternatively, $S'$ can be induced with completely new topologies, each one with its own connectedness conditions.
Topologies given in previous sections of this paper illustrate it, too. The shape in Figure 14a, for example, \lq looks disconnected\rq\;because it comes in two pieces. It is topologically connected with respect to the topologies in Figure 16a and 16c, but topologically disconnected with respect to the topologies in Figure 15a and 16b.
The intuitive observation that a shape \lq looks connected\rq\;or \lq looks disconnected\rq\;can be expressed in a formally precise way through the concept of \emph{touching} for basic elements in algebras $U_i$, which is defined in \cite[pp. 171-173]{Stiny2006}. Using touching, we can state that a shape \emph{looks connected} (or \emph{is visually connected}) if it is made of basic elements of the same kind that touch\footnote{Touching is defined recursively: basic elements of the same kind \emph{touch} if there are basic elements embedded in each with boundary elements that touch. For example, pairs of lines touch when there are endpoints that do; pairs of planes touch when there are lines embedded in their boundaries with endpoints that do, and so on. One can classify the different ways in which basic elements touch and a demonstration is given in \cite{Stiny2006}.}; otherwise, it \emph{looks disconnected} (or \emph{is visually disconnected}). In Figure 25, shapes are classified according to whether they look connected or disconnected in this visual sense.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.83]{EmbeddedFigure25.pdf}
\caption{Examples of shapes that look connected and shapes that look disconnected.}
\label{Figure25}
\end{figure}
Visual connectedness is about appearance and is independent of structural connectedness. Structural connectedness is a topological property. It depends only on the topology induced on a shape, that is, on the open parts recognized in it (connectedness is not a property to determine just by looking at a shape). As a bottom line:
\vspace{12pt}
\centerline{\emph{Appearance is independent of structure}.}
\vspace{12pt}
\subsection{Further characterizations of connectedness for shapes}
The following result is similar to topological spaces. Let $S$ be a shape with topology $\mathcal{T}$. If a pair $C$, $D$ of open parts forms a separation of shape $S$, and a part $x$ is connected in its subshape topology $\mathcal{T}_x$, then $x$ is part of either $C$ or $D$.
There is a simple proof analogous to that for spaces, which I omit. Consider an example instead. Suppose $S$ is the line in Figure 24a and is induced with the topology in Figure 26a. The parts $C$ and $D$ in the figure form a separation of $S$. The subshape topology associated with a part $x$ of $S$ falls under the three different cases I show in Figure 26b. If $x$ extends over both $C$ and $D$, its subshape topology will be disconnected. It will be connected, however, if $x$ is entirely embedded in either $C$ or $D$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.83]{EmbeddedFigure26.pdf}
\caption{(a) A shape $S$, made of a single line, with disconnected topology. Open parts $C$ and $D$ form a separation for $S$. (b) The subshape topology for a part $x$ is connected when $x$ is entirely embedded in either $C$ or $D$.}
\label{Figure26}
\end{figure}
A shape $S$ can be connected with respect to a topology $\mathcal{T}$, but certain open parts of it may be disconnected in their associated subshape topologies. Hence, the following complementary notion.
Say that $S$ is \emph{locally connected at $C$}, if the open part $C$ is connected in its associated subshape topology $\mathcal{T}_C$. If $S$ is locally connected at every open part $C$, then $S$ is said to be \emph{locally connected}.
If a shape $S$ with topology $\mathcal{T}$ is locally connected, then it is connected (a trivial observation, since $S$ itself is an open part); the converse, however, is not true. For example, the shape in Figure 21a, with either one of the topologies in Figure 21c and 21d, is connected but is not locally connected at certain open parts.
It is reasonable to ask whether we can have a finite topology for a shape in which \emph{every nonempty open part} is disconnected in its associated subshape topology---a \lq totally disconnected\rq\;finite topology, so to speak. Since topologies for shapes are finite, and there are no points, open parts cannot be partitioned indefinitely in them. Ultimately, there must be certain \lq minimal\rq\;open parts which are connected in their subshape topologies, even if every other open part is disconnected. These minimal open parts are precisely the elements in a reduced basis. The following finitistic definition of a totally disconnected topology for a shape derives from this fact.
A shape $S$ with topology $\mathcal{T}$ is \emph{totally disconnected} if $S$ is disconnected and the only connected open parts in $\mathcal{T}$ are the basis elements in the reduced basis of $\mathcal{T}$.
An example of a topology that makes the shape in Figure 14a totally disconnected (in this finitistic sense), is shown in Figure 27. The reduced basis of this topology is the set of four nonempty open parts in the bottom row.
\begin{figure}[ht]
\centering
\includegraphics{EmbeddedFigure27.pdf}
\caption{A totally disconnected topology for the shape in Figure 14a.}
\label{Figure27}
\end{figure}
Let $S$ be a shape with topology $\mathcal{T}$, and let $\mathcal{B}$ be the reduced basis of $\mathcal{T}$. The following statements are equivalent:
\begin{enumerate}
\item[(1)] $S$ is totally disconnected.
\item[(2)] $\mathcal{B}$ consists of disjoint nonempty parts only (excluding the \emph{empty shape}).
\item[(3)] $\mathcal{T}$ consists of closed-open parts only.
\item[(4)] $\mathcal{T}$ determines a finite Boolean algebra over the closed-open parts of $S$, with bottom element \emph{0} and top element $S$ itself.
\end{enumerate}
\begin{proof}
The proofs of implications (1) through (4) are almost automatic.
(1)$\implies$(2): Suppose for contradiction that $b_i$ and $b_j$ are two basis elements in $\mathcal{B}$ that are not disjoint. Then, there must be an open part $C$ formed as the sum of $b_i$ and $b_j$, which is connected. But we already know from assumption (1) that such an open part cannot exist.
(2)$\implies$(3): By definition, $S$ can be expressed as a sum of disjoint nonempty basis elements $b_1, ..., b_n$ of $\mathcal{B}$. Given $C$, an open part in $\mathcal{T}$, $C$ is either a basis element or a sum of basis elements, i.e. $C = b_1 + ... + b_k$ where $1 \leq k \leq n$. Then, rewrite $S = C + b_{k+1} + ... + b_n$, so that $S - C = b_{k+1} + ... + b_n$. The right-hand side of the latter equality is an open part in $\mathcal{T}$, by definition. Thus, we have shown that $C$ and its relative complement $S - C$ are open in $\mathcal{T}$, and therefore $C$ is closed-open.
(3)$\implies$(4): $\mathcal{T}$ is, by definition, a finite, distributive lattice, with bottom element \emph{0} and top element $S$. By assumption (3), every member of $\mathcal{T}$ is closed-open and thus comes with a relative complement. Hence, $\mathcal{T}$ is a Boolean algebra over the closed-open parts of $S$.
(4)$\implies$(1): From Stone duality, the finite Boolean algebra determined from assumption (4) corresponds to a totally disconnected space (a Stone space). The isolated points of this space are in bijection with the atoms of the Boolean algebra, and the closed-open subsets are formed by unions of those isolated points. As shown in \emph{Section 5}, this space is isomorphic to a finite shape topology.
\end{proof}
Finally, statement (2) given above implies we can form a totally disconnected topology for a shape from a basis of disjoint nonempty parts. In how many ways can this happen? Indefinitely many. Any shape can be decomposed into finitely many disjoint parts (that sum to the shape), in any number of different ways, each one resulting in a different totally disconnected topology (contrast this result with finite spaces, where totally disconnected topologies are essentially formed uniquely).
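The claim that a basis of $n$ disjoint nonempty parts generates a Boolean algebra of $2^n$ closed-open members can be checked with a small computation. The sketch below is my own illustration, not part of the paper's formalism: since shapes are point-free, abstract labels merely index the disjoint basis elements, and a part is stood in for by the set of labels of the basis elements it sums.

```python
from itertools import chain, combinations

def topology_from_disjoint_basis(blocks):
    """Generate the finite topology whose basis is a family of disjoint parts.

    Each part is modeled as a frozenset of labels; the opens are all unions
    of basis elements (including the empty union, i.e. the empty shape 0)."""
    blocks = [frozenset(b) for b in blocks]
    opens = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            opens.add(frozenset(chain.from_iterable(combo)))
    return opens

def is_clopen(U, S, opens):
    """U is closed-open iff both U and its relative complement S - U are open."""
    return U in opens and (S - U) in opens

# Three disjoint parts of a shape S, indexed by abstract labels.
basis = [{"a"}, {"b"}, {"c"}]
S = frozenset("abc")
T = topology_from_disjoint_basis(basis)

# The generated topology is a Boolean algebra: 2^3 = 8 members, all clopen,
# with bottom element the empty shape and top element S itself.
assert len(T) == 8
assert frozenset() in T and S in T
assert all(is_clopen(U, S, T) for U in T)
```

Different decompositions of the same shape into disjoint parts give different bases, and hence different totally disconnected topologies, in line with the remark above.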
\section{Discussion}
\subsection{More topological concepts for shapes}
\emph{Sections 3} through \emph{7} are not exhaustive, but they cover a wide range of foundational topological concepts. Many other concepts are possible, though; here I briefly go through some of them.
One important topic in topology that I have left unmentioned is that of the so-called \emph{countability} and \emph{separation axioms}. Traditionally, these axioms reflect properties of the overall topological structure assigned to an object, and are more often used in the study of infinite objects as opposed to finite ones (consult \cite{Kurat72} and \cite{Munkr00} for the required background). The countability axioms depend on the coverings and the basis of a topology. Using the definitions given in this paper (\emph{Section 3}), one could mimic some of the countability axioms in the context of shapes in a straightforward way. For example, any shape, with any finite topology, is \emph{second-countable}: finite topologies for shapes are indeed generated by countable bases. This also implies, automatically, that every covering of a shape in a finite topology admits a finite subcovering (namely, itself)---a \emph{compactness} property.
The separation axioms depend on how members of a topology are \lq separated\rq \;by certain other members which are disjoint from one another. In general, they are defined in terms of points, and so a direct transfer to shapes is not always possible or intuitive. For example, the T$_0$ separation axiom is not really applicable to shapes in algebras $U_i$ when $i > 0$, because it refers to the separation of points. (It is applicable, however, to shapes made with points in algebra $U_0$ as discussed in \cite{Haridis2019}.) One way to formulate separation axioms is to cast them (when reasonably applicable) in terms of part relations between open parts. For example, the \emph{normal} and \emph{regular} separation axioms can be mimicked with part relations without trouble.
Other topics that I have not discussed, but that are plausible to pursue for finite topologies on shapes, are the constructions of \emph{product topologies}, \emph{homeomorphisms}, \emph{neighborhood systems} and \emph{metrizability}. The definition of a continuous mapping in \emph{Section 6.2} can be used to obtain a working notion of a homeomorphism between two shapes. Then one could ask questions of the sort: Are a square and a triangle, as shapes in algebra $U_1$, always homeomorphic to each other? How about a square and a circle? The concept of a \lq neighborhood system\rq\;can be formulated through \emph{topological filters}. With respect to metrizability, under what conditions are finite topologies for shapes in algebras $U_i$ metrizable? Is there something meaningful to say about the relationship between metrizable topologies for shapes and the classical metric spaces we find in general topology? In light of a point-free framework for topology on shapes, one could also ask what happens with the classical concept of convexity, which is normally understood in point-set theoretical terms.
It would also be interesting to see if (and what) extensions are required to the material developed in this paper in order to accommodate topology on \emph{shapes made with weights} (e.g. colors, thicknesses, and so on) or shapes made with basic elements from \emph{composite algebras}. For example, how would a topology be defined when a shape is made up of both points and lines or both lines and planes? Last, it is worth exploring infinite topologies for shapes in algebras $U_i$ where $i > 0$, to see which topological constructions of this paper transfer \lq smoothly\rq\;to the infinite case.
\subsection{Mathematics of shapes (in art and design)}
The abstract idea of \lq structure\rq\;is a recurring theme in the mathematical description and analysis of design objects in architecture, visual arts, city planning and other design areas. There are many ways to talk about structure and perhaps the only assumption common to all is a satisfactory answer to the question \lq What is it [the object] made of?\rq\;\cite{Runci69}. When asking about the structure of an object, it is important to have the following distinction in mind, which recapitulates the motivations for the work I presented in this paper.
If the object under consideration is analyzed, that is, if the elements it is composed of are given individually (e.g. as the points of a space), then to determine its structure one is led to restrict attention to the interconnections and interdependencies of those individual elements (for example, topology with finite sets of points, or with the vertices of a graph, falls under this category). What happens in the case where the object under consideration comes without a given, or apparent, set of individual elements? This is exactly the case of shapes in algebras $U_i$ when $i > 0$ (and, by analogy, the case of drawings, pictures, and models in art and design). Such shapes are inherently \emph{unanalyzed}; not only are they bereft of a point-set substrate, they also come with no definite subdivision into parts (their appearances are inherently ambiguous). For any given shape, to get the \lq elements upon which structure can be induced and thereafter studied mathematically\rq, an interpretative act must be introduced. This interpretative act may take place not singly but multiply: shapes (made of either lines, planes or solids) have finite definition, but are indefinitely interpretable into parts (\emph{Section 2}).
This distinction is important for another reason. Most work on structural mathematics in design, especially in the fields of architecture and planning, is about the study of relations on fixed and definite parts of a given object---the \lq analyzed approach\rq\;I describe just above. There are multiple examples from the literature; see \cite{Alexan64, AlexanPoyn67R3, Atkin74} and \cite{MarchStead74, Steadman83}. While this is a reasonable approach, it is perhaps favored only because no domain of mathematics that deals explicitly with structure (e.g. finite combinatorial topology, graph theory, combinatorics) has anything to do with appearance. It is thus hoped that this paper brings into awareness that the structure (topology, in this case) of objects in art and design can indeed be studied mathematically, yet in a way that retains their pictorial/spatial characteristics and incorporates their aesthetic, interpretative capacity (naturally ingrained in the way designers and artists work).
By \lq mathematics of shapes\rq\;then (algebra or topology, for instance), we do not mean only the application of mathematics by injecting shapes with number or point-set theoretical properties and patterns. More importantly, we mean the mathematics that arises naturally when we take the pictorial/spatial characteristics of shapes as the starting point of a mathematical investigation.
\section*{Acknowledgement(s)}
This paper is published as a full article in the \emph{Journal of Mathematics and the Arts} (Taylor $\&$ Francis), available at: \url{https://doi.org/10.1080/17513472.2020.1723828}.
I wish to thank Professor George Stiny for his valuable feedback on earlier versions of this paper that improved the presentation of the material. Special thanks to Professor Terry Knight for her constructive suggestions as associate editor of the journal.
\section*{Declaration of interest statement}
No potential conflict of interest was reported by the authors.
TITLE: Quillen groupoid of a groupoid.
QUESTION [1 upvotes]: For any category $\mathcal{C}$ we can define its Quillen's groupoid, denoted $\mathcal{Q}(\mathcal{C})$, as the category which has the same objects as $\mathcal{C}$ and in which the arrows between two objects are $\operatorname{Hom}_{\mathcal{Q}(\mathcal{C})}(c, c^\prime)=\frac{\operatorname{Path}_\mathcal{C}(c, c^\prime)}{\sim}$.
Here $\operatorname{Path}_{\mathcal{C}}(c, c^\prime)$ is the set of all paths between $c$ and $c^\prime$ (that is, splices of arbitrary finite strings of arrows, each of which may point to the right or to the left, where the first object must be $c$ and the last one $c^\prime$).
And the relation $\sim$ is the equivalence relation generated by the following:
two paths are elementary homotopic if one is obtained from the other by replacing a morphism occurring as one side of a commutative triangle with the other two sides, pointing in the appropriate directions.
The question is: if $\mathcal{C}$ is a groupoid (a category with all its arrows being isomorphisms), do we have $\mathcal{Q}(\mathcal{C})=\mathcal{C}$?
REPLY [1 votes]: The comments (which should have been posted as answers) contain more general statements. If $\mathcal{C}$ is a groupoid, then you can explicitly write down an equivalence $\mathcal{C} \simeq Q(\mathcal{C})$. There is always a canonical functor $\mathcal{C} \to Q(\mathcal{C})$ which is bijective on objects. So it remains to check that it is fully faithful when $\mathcal{C}$ is a groupoid. Fullness: If $c=c_0 \leftarrow c_1 \rightarrow c_2 \leftarrow \dotsc \rightarrow c_n = c'$ is a path from $c$ to $c'$, it is induced by the morphism $c = c_0 \to c_1 \to c_2 \to \dotsc \to c_n = c'$ in $\mathcal{C}$, where we have inverted the arrows $\leftarrow$ to get arrows $\rightarrow$. Faithfulness can be proved by "induction" on the number of elementary homotopies used in the generated equivalence relation.
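To see the fullness argument in action, here is a toy computation of my own (not from the answer): model a one-object groupoid by the group $(\mathbb{Z}, +)$, so every arrow is invertible, and collapse a zigzag path to a single direct arrow by inverting the backward steps.

```python
def compose_zigzag(steps):
    """Collapse a zigzag path in the one-object groupoid (Z, +) to one arrow.

    Each step is a pair (g, d): the arrow g traversed forward (d = +1) or
    backward (d = -1). In a groupoid every arrow is invertible, so a backward
    step contributes the inverse -g, and the whole path composes to a single
    morphism, exactly as in the fullness argument above."""
    total = 0
    for g, d in steps:
        total += g if d == +1 else -g
    return total

# The zigzag c <-(3)- c1 -(5)-> c2 <-(2)- c' collapses to -3 + 5 - 2 = 0.
zigzag = [(3, -1), (5, +1), (2, -1)]
assert compose_zigzag(zigzag) == 0
```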
TITLE: How to prove that the intersection of $L^1(\mathbb{R})$ and $L^2(\mathbb{R})$ is dense in $L^2(\mathbb{R})$
QUESTION [5 upvotes]: How to prove that the intersection of $L^1(\mathbb{R})$ space and $L^2(\mathbb{R})$ space is dense in $L^2(\mathbb{R})$ space?
REPLY [7 votes]: If $f \in L^2(\mathbb R)$, then define $f_n(x)$ by $f_n(x) = f(x)$ if $|x| < n$ and $f_n(x) = 0$ if $|x| \geq n$.
Then $f_n(x) \in L^1(\mathbb R) \cap L^2(\mathbb R)$ and $\lim_{n \rightarrow \infty} (f(x) - f_n(x)) = 0$ pointwise. So by the dominated convergence theorem
$$\lim_{n \rightarrow \infty} \int_{\mathbb R}|f-f_n|^2 = 0 \tag1$$
The dominating function here is just $|f|^2$, since $|f-f_n| \leq |f|$.
Equation $(1)$ is equivalent to saying $\lim_{n \rightarrow \infty} ||f-f_n||_{L^2}^2 = 0$, so we have that $\lim_{n \rightarrow \infty} ||f-f_n||_{L^2} = 0$ too.
So we conclude that $L^1(\mathbb R) \cap L^2(\mathbb R)$ is dense in $L^2(\mathbb R)$.
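A concrete instance of this truncation argument (my own numerical check, with illustrative helper names): take $f(x) = 1/(1+|x|)$, which lies in $L^2(\mathbb R)$ but not in $L^1(\mathbb R)$. The tail norm $\|f - f_n\|_{L^2}^2 = 2\int_n^\infty \frac{dx}{(1+x)^2} = \frac{2}{1+n}$ has a closed form that tends to $0$, and it can be cross-checked by quadrature.

```python
def tail_norm_sq(n):
    """||f - f_n||_{L^2}^2 for f(x) = 1/(1+|x|), f_n = f * 1_{|x| < n}.

    By symmetry this equals 2 * integral_n^inf dx/(1+x)^2 = 2/(1+n)."""
    return 2.0 / (1.0 + n)

def tail_norm_sq_numeric(n, upper=10_000, steps=200_000):
    """Midpoint-rule check of the same tail integral, truncated at `upper`."""
    h = (upper - n) / steps
    total = sum(1.0 / (1.0 + (n + (k + 0.5) * h)) ** 2 for k in range(steps))
    return 2.0 * h * total

# The truncations converge to f in L^2: the tail norm decreases to 0 ...
assert tail_norm_sq(1) > tail_norm_sq(10) > tail_norm_sq(100)
assert tail_norm_sq(1000) < 2e-3
# ... and the closed form agrees with direct numerical integration.
assert abs(tail_norm_sq(5) - tail_norm_sq_numeric(5)) < 1e-3
```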
\begin{document}
\title{\small{\textbf{STRONG-VISCOSITY SOLUTIONS: SEMILINEAR PARABOLIC PDEs\\ AND PATH-DEPENDENT PDEs}}}
\author{
\textsc{Andrea Cosso}\thanks{Laboratoire de Probabilit\'es et Mod\`eles Al\'eatoires, CNRS, UMR 7599, Universit\'e Paris Diderot, France, \sf cosso at math.univ-paris-diderot.fr}
\qquad\quad
\textsc{Francesco Russo}\thanks{Unit\'e de Math\'ematiques Appliqu\'ees, ENSTA ParisTech, Universit\'e Paris-Saclay,
828, boulevard des Mar\'echaux, F-91120 Palaiseau, France, \sf francesco.russo at ensta-paristech.fr}
}
\date{April 24, 2015}
\maketitle
\begin{abstract}
\noindent The aim of the present work is the introduction of a viscosity type solution, called \emph{strong-viscosity solution} to distinguish it from the classical one, with the following peculiarities: it is a purely analytic object; it can be easily adapted to more general equations than classical partial differential equations. First, we introduce the notion of strong-viscosity solution for semilinear parabolic partial differential equations, defining it, in a few words, as the pointwise limit of classical solutions to perturbed semilinear parabolic partial differential equations; we compare it with the standard definition of viscosity solution. Afterwards, we extend the concept of strong-viscosity solution to the case of semilinear parabolic path-dependent partial differential equations, providing an existence and uniqueness result.
\vspace{4mm}
\noindent {\bf Keywords:} strong-viscosity solutions; viscosity solutions; backward stochastic differential equations; path-dependent partial differential equations.
\vspace{4mm}
\noindent {\bf AMS 2010 subject classifications:} 35D40; 35R15; 60H10; 60H30.
\end{abstract}
\section{Introduction}
As it is well-known, viscosity solutions represent a cornerstone in the theory of Partial Differential Equations (PDEs) and their range of application is enormous, see the user's guide \cite{crandishiilions92}. Here, we just emphasize the important role they played in the study of semilinear parabolic
partial differential equations. We also emphasize the role of Backward Stochastic Differential Equations
(BSDEs), which constitute a probabilistic counterpart of viscosity solutions of
semilinear parabolic partial differential equations, see the seminal paper \cite{pardoux_peng92}.
\vspace{1mm}
The aim of the present work is the definition of a variant of viscosity type solution,
called \emph{strong-viscosity solution}
to distinguish it from the classical one. Compared to this latter, for several aspects it seems easier to handle and it can be easily adapted to a large class of equations.
\vspace{1mm}
In recent years, there has been a renewed interest in the study of generalized partial differential equations, motivated by the study of Markovian stochastic control problems with state variable living in an infinite dimensional space (see \cite{daprato_zabczyk14}) or path-dependent problems, for example, stochastic control problems with delay, see \cite{fabbrigozziswiech14}. The theory of backward stochastic differential equations is flexible enough to be extended to deal with both problems, see, e.g., \cite{fuhrman_pham13} and \cite{fuhrman_tessitore02}. From an analytic point of view, regarding infinite dimensional Markovian problems, there exists in general a corresponding partial differential equation in infinite dimension, and also the notion of viscosity solution has been extended to deal with this case, see \cite{crandall_lions91}, \cite{swiech94}, and \cite{fabbrigozziswiech14}. However, uniqueness for viscosity solutions revealed to be arduous to extend to the infinite dimensional setting and requires, in general, strong assumptions on the coefficients of the partial differential equation.
\vspace{1mm}
Concerning path-dependent problems, it is still not clear what should be the corresponding analytic characterization in terms of partial
differential equations, whose probabilistic counterpart is represented by the backward stochastic differential equation. A possible solution to
this problem is represented by the class of equations introduced in Chapter 9 of \cite{DGR} within the framework of Banach space valued calculus,
for which we refer also to \cite{flandoli_zanco13}. Alternatively, \cite{dupire} introduced the concept of Path-dependent Partial Differential
Equation (PPDE), which could do the job. Even if it is still not completely definite in the literature what a path-dependent partial differential
equation is (indeed, it mainly depends on the definition of functional derivatives adopted), the issue of providing a suitable definition of
viscosity solution for path-dependent partial differential equations has already attracted a great interest, see for example
\cite{ektz,etzI,etzII,peng12,rtz1,tangzhang13}, motivated by the fact that regular solutions to path-dependent PDEs in general
exist only under strong assumptions, see Remark \ref{R:MotivationSV}. We drive the attention in particular to the definition of viscosity
solution to path-dependent PDEs provided by \cite{ektz,etzI,etzII,rtz1}, where the classical minimum/maximum property, appearing in the standard
definition of viscosity solution, is replaced with an optimal stopping problem under nonlinear expectation~\cite{etzOptStop}.
Notice that probability plays an essential role in this latter definition, which can, more properly, be interpreted as a probabilistic version of
the standard definition of viscosity solution, rather than a purely analytic object; indeed, quite interestingly, the proof of the comparison
principle turns out to be nearly a ``translation'' into probabilistic terms of the classical proof of the comparison principle, see \cite{rtz1}.
We also emphasize that a similar notion of solution, called stochastic weak solution, has been introduced in the recent paper
\cite{leao_ohashi_simas14} in the context of variational inequalities for the Snell envelope associated to a non-Markovian continuous process $X$.
Those authors also revisit functional It\^o
calculus, making use of stopping times. This approach seems very promising.
\vspace{1mm}
The paper is organized as follows. First, in Section \ref{S:SV_Markov}, we develop the theory of strong-viscosity solutions in the finite dimensional
Markovian case, applying it to semilinear parabolic partial differential equations. A strong-viscosity supersolution (resp. subsolution) is defined,
in a few words, as the pointwise limit of classical supersolutions (resp. subsolutions) to perturbed semilinear parabolic PDEs.
A generalized strong-viscosity solution is both a strong-viscosity supersolution and a strong-viscosity subsolution. This definition is more in the
spirit of the standard definition of viscosity solution. We also introduce another definition, simply called strong-viscosity solution, which is
defined as the pointwise limit of classical solutions to perturbed semilinear parabolic PDEs. We notice that the definition of strong-viscosity
solution is similar in spirit to the vanishing
viscosity method, which represents one of the primitive ideas leading to the conception of the modern definition of viscosity solution and
justifies the term \emph{viscosity} in the name, which is also justified by the fact that a strong-viscosity solution is not assumed to be
differentiable.
Our definition is likewise inspired by the notion of strong solution (which explains the presence of the term \emph{strong} in the name), as defined for example in \cite{cerrai01}, \cite{gozzi_russo06a}, and \cite{gozzi_russo06b}, even though strong solutions are in general required to be more regular than viscosity type solutions. Finally, we observe that the notion of strong-viscosity solution has also some similarities with the concept of
\emph{good solution}, which turned out to be equivalent to the definition of $L^p$-viscosity solution for certain fully nonlinear partial differential equations, see, e.g., \cite{cerutti_escauriaza_fabes93}, \cite{crandall_kocan_soravia_swiech94}, \cite{jensen96}, and \cite{jensen_kocan_swiech02}.
\vspace{1mm}
We prove in Section \ref{S:SV_Markov}, Theorem \ref{T:RepresentationSuperSub}, that every strong-viscosity supersolution (resp. subsolution) can be represented in terms of a supersolution (resp. subsolution) to a backward stochastic differential equation. This in turn implies that a comparison principle (Corollary \ref{C:CompThm}) for strong-viscosity sub and supersolutions holds and follows from the comparison theorem for backward stochastic differential equations. In particular, the proof of the comparison principle is probabilistic and easier to extend to different contexts than the corresponding proof for classical viscosity solutions, which is based on real analysis' tools as Ishii's lemma and the doubling of variables technique. We conclude Section \ref{S:SV_Markov} providing two existence results (Theorem \ref{T:ExistSV_Markov} and Theorem \ref{T:ExistSV_Markov2}) for strong-viscosity solutions under quite general assumptions.
\vspace{1mm}
In Section \ref{S:SVPath} we extend the notion of strong-viscosity solution to the case of semilinear parabolic path-dependent partial differential equations, leaving to future research other possible extensions, e.g., the case of partial differential equations in infinite dimension. For PPDEs, as already said, a viscosity type solution, meant as a purely analytic object, is still missing, so we try to fill the gap. As previously noticed, the concept of path-dependent partial differential equation is still not definite in the literature and, in the present paper, we adopt the setting developed in the companion paper \cite{cosso_russo15a}. However, we notice that, if we had worked with the definition of functional derivatives and path-dependent partial differential equation used, e.g., in \cite{dupire,contfournie13}, the same results would hold without any change, but for notational ones,
see \cite{cosso_russo15a} for some insights on the link between these different settings. Let us explain the reasons why we adopt the definitions of \cite{cosso_russo15a}. First, in \cite{cosso_russo15a} the time and space variables $(t,\eta)\in[0,T]\times C([-T,0])$ play two distinct roles; moreover the space variable $\eta$ (i.e., the path) always represents the past trajectory of the process. This is in complete accordance with the literature on stochastic control problems with delay (see, e.g., \cite{cho} and \cite{fabbrigozziswiech14}), which is, for us, one of the main applications of path-dependent partial differential equations. On the contrary, in \cite{contfournie13} the time and space variables are strictly related to each other; moreover, the path represents the entire trajectory (past, present, and future) of the process, so that the notion of non-anticipative functional is required (see Definition 2.1 in \cite{contfournie13}).
\vspace{1mm}
We prove in Section \ref{S:SVPath}, Theorem \ref{T:UniqSV}, a uniqueness result for strong-viscosity solutions to path-dependent PDEs proceeding as in the finite dimensional Markovian case, i.e., by means of probabilistic methods based on the theory of backward stochastic differential equations. We also prove an existence result (Theorem \ref{T:ExistSV}) for strong-viscosity solutions in a more restrictive framework, which is based on the idea that a candidate solution to the path-dependent PDE is deduced from the corresponding backward stochastic differential equation. The existence proof consists in building a sequence of strict solutions (we prefer to use the term \emph{strict} in place of \emph{classical}, because even the notion of smooth solution can not be considered classical for path-dependent partial differential equations; indeed, all the theory is very recent) to perturbed path-dependent PDEs converging to our strong-viscosity solution. This regularization procedure is performed having in mind the following simple property: when the coefficients of the path-dependent partial differential equation are smooth enough the solution is smooth as well, i.e., the solution is strict. In the path-dependent case, smooth coefficients means \emph{cylindrical coefficients}, i.e., smooth maps of integrals of regular functions with respect to the path, as in the statement of Theorem \ref{T:ExistenceStrict}.
\vspace{1mm}
Finally, we defer some technical results to the Appendix. More precisely, we prove some basic estimates for path-dependent stochastic differential equations in Lemma \ref{L:AppendixX}. Then, we state a standard (but, to our knowledge, not at disposal in the literature) estimate for supersolutions to non-Markovian backward stochastic differential equations, see Proposition \ref{P:EstimateBSDEAppendix}. Afterwards, we prove the limit Theorem \ref{T:LimitThmBSDE} for supersolutions to backward stochastic differential equations, partly inspired by the monotonic limit theorem of Peng~\cite{peng00}, even if it is formulated under a different set of assumptions, for example, the monotonicity property is not assumed. We conclude the Appendix with a technical result, Lemma \ref{L:StabilityApp}, of real analysis.
\section{Strong-viscosity solutions in the Markovian case}
\label{S:SV_Markov}
In the present section we introduce the notion of strong-viscosity solution in the non-path-depen\-dent case, for the semilinear parabolic PDE
\begin{equation}
\label{KolmEq_Markov}
\begin{cases}
\partial_t u(t,x) + \langle b(t,x),D_x u(t,x)\rangle + \frac{1}{2}\textup{tr}(\sigma\sigma\trans(t,x)D_x^2 u(t,x)) & \\
\hspace{2.8cm}+\, f(t,x,u(t,x),\sigma\trans(t,x)D_x u(t,x)) \ = \ 0, &\forall\,(t,x)\in[0,T)\times\R^d, \\
u(T,x) \ = \ h(x), &\forall\,x\in\R^d,
\end{cases}
\end{equation}
where $b\colon[0,T]\times\R^d\rightarrow\R^d$, $\sigma\colon[0,T]\times\R^d\rightarrow\R^{d\times d}$, $f\colon[0,T]\times\R^d\times\R\times\R^d\rightarrow\R$, and $h\colon\R^d\rightarrow\R$ satisfy the following assumptions.
\vspace{3mm}
\noindent\textbf{(A0)} \hspace{3mm} $b$, $\sigma$, $f$, $h$ are Borel measurable functions satisfying, for some positive constants $C$ and $m$,
\begin{align*}
|b(t,x)-b(t,x')| + |\sigma(t,x)-\sigma(t,x')| \ &\leq \ C|x-x'|, \\
|f(t,x,y,z)-f(t,x,y',z')| \ &\leq \ C\big(|y-y'| + |z-z'|\big), \\
|b(t,0)| + |\sigma(t,0)| \ &\leq \ C, \\
|f(t,x,0,0)| + |h(x)| \ &\leq \ C\big(1 + |x|^m\big),
\end{align*}
for all $t\in[0,T]$, $x,x'\in\R^d$, $y,y'\in\R$, and $z,z'\in\R^d$.
\subsection{Notations}
\label{SubS:Notation}
We denote by $\R^{d\times d}$ the linear space of real matrices of order $d$. Throughout the paper, $|\cdot|$ denotes the absolute value of a real number or the usual Euclidean norm in $\R^d$ or the Frobenius norm in $\R^{d\times d}$.
\vspace{1mm}
We fix a complete probability space $(\Omega,\Fc,\P)$ on which a $d$-dimensional Brownian motion $W=(W_t)_{t\geq0}$ is defined. Let $\F=(\Fc_t)_{t\geq0}$ denote the completion of the natural filtration generated by $W$. We introduce the following spaces of stochastic processes.
\begin{itemize}
\item $\S^p(t,T)$, $p\geq1$, $t \leq T$, the set of real c\`adl\`ag adapted processes $Y=(Y_s)_{t\leq s\leq T}$ such that
\[
\|Y\|_{_{\S^p(t,T)}}^p := \ \E\Big[ \sup_{t\leq s\leq T} |Y_s|^p \Big] \ < \ \infty.
\]
\item $\H^p(t,T)^d$, $p$ $\geq$ $1$, $t \leq T$, the set of $\R^d$-valued predictable processes $Z=(Z_s)_{t\leq s\leq T}$ such that
\[
\|Z\|_{_{\H^p(t,T)^d}}^p := \ \E\bigg[\bigg(\int_t^T |Z_s|^2 ds\bigg)^{\frac{p}{2}}\bigg] \ < \ \infty.
\]
We simply write $\H^p(t,T)$ when $d=1$.
\item $\A^{+,2}(t,T)$, $t \leq T$, the set of real nondecreasing predictable processes $K$ $=$ $(K_s)_{t\leq s\leq T}\in\S^2(t,T)$ with $K_t$ $=$ $0$, so that
\[
\|K\|_{_{\S^2(t,T)}}^2 = \ \E\big[K_T^2\big].
\]
\item $\L^p(t,T;\R^{d'})$, $p\geq1$, $t \leq T$, the set of $\R^{d'}$-valued adapted processes $\varphi = (\varphi_s)_{t \leq s \leq T}$ such that
\[
\|\varphi\|_{_{\L^p(t,T;\R^{d'})}}^p := \ \E\bigg[\int_t^T |\varphi_s|^p ds\bigg] \ < \ \infty.
\]
\end{itemize}
We also consider, for every $(t,x)\in[0,T]\times\R^d$, the stochastic differential equation:
\begin{equation}
\label{SDE_Markov}
\begin{cases}
dX_s = \ b(s,X_s)ds + \sigma(s,X_s)dW_s, \qquad\qquad s\in[t,T], \\
X_t \ = \ x.
\end{cases}
\end{equation}
It is well-known (see, e.g., Theorem 14.23 in \cite{jacod79}) that, under Assumption {\bf (A0)}, there exists a unique $($up to indistinguishability$)$ $\F$-adapted continuous process $X^{t,x}=(X_s^{t,x})_{s\in[t,T]}$ strong solution to equation \eqref{SDE_Markov}.
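As a minimal numerical sketch, outside the paper's formalism, of how the strong solution to \eqref{SDE_Markov} is approximated in practice, the Euler--Maruyama scheme discretizes the dynamics; in the degenerate case $\sigma \equiv 0$ the scheme can be checked against the exact ODE solution. All names and parameter choices below are illustrative.

```python
import math
import random

def euler_maruyama(b, sigma, t, x, T, n_steps, rng):
    """Euler-Maruyama scheme for dX_s = b(s, X_s) ds + sigma(s, X_s) dW_s,
    started at X_t = x; b and sigma are assumed Lipschitz as in (A0)."""
    h = (T - t) / n_steps
    X, s = x, t
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(h))  # Brownian increment over [s, s+h]
        X = X + b(s, X) * h + sigma(s, X) * dW
        s += h
    return X

rng = random.Random(0)
# Degenerate case sigma = 0: the SDE reduces to the ODE dX = -X ds,
# whose exact solution is X_T = x * exp(-(T - t)).
X_T = euler_maruyama(lambda s, x: -x, lambda s, x: 0.0, 0.0, 1.0, 1.0, 100_000, rng)
assert abs(X_T - math.exp(-1.0)) < 1e-4
```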
\subsection{First definition of strong-viscosity solution}
We begin recalling the standard definition of classical solution.
\begin{Definition}
A function $u\colon[0,T]\times\R^d\rightarrow\R$ is called \textbf{classical solution} to equation \eqref{KolmEq_Markov} if $u\in C^{1,2}([0,T[\times\R^d)\cap C([0,T]\times\R^d)$ and solves \eqref{KolmEq_Markov}.
\end{Definition}
\noindent We state a uniqueness result for classical solutions.
\begin{Proposition}
\label{P:UniqStrictFiniteStandard}
Suppose that Assumption {\bf (A0)} holds. Let $u\colon[0,T]\times\R^d\rightarrow\R$ be a classical solution to equation \eqref{KolmEq_Markov}, satisfying the polynomial growth condition
\begin{equation}
\label{PolGrowthCondMarkov}
|u(t,x)| \ \leq \ C'\big(1 + |x|^{m'}\big), \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\end{equation}
for some positive constants $C'$ and $m'$. Then, the following Feynman-Kac
formula holds:
\begin{equation}
\label{Identification2}
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\end{equation}
where $(Y_s^{t,x},Z_s^{t,x})_{s\in[t,T]}=(u(s,X_s^{t,x}),\sigma\trans(s,X_s^{t,x})D_x u(s,X_s^{t,x})1_{[t,T[}(s))_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d$ is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\begin{equation}
\label{BSDE_Markov2}
Y_s^{t,x} \ = \ h(X_T^{t,x}) + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr - \int_s^T Z_r^{t,x} dW_r, \qquad t \leq s \leq T.
\end{equation}
In particular, there exists at most one classical solution to equation \eqref{KolmEq_Markov} satisfying a polynomial growth condition as in \eqref{PolGrowthCondMarkov}.
\end{Proposition}
\textbf{Proof.}
The proof is standard, even if we have not found an exact reference for it in the literature. We just give the main ideas. Fix $(t,x)\in[0,T[\times\R^d$ and set, for all $t \leq s \leq T$,
\[
Y_s^{t,x} \ = \ u(s,X_s^{t,x}), \qquad Z_s^{t,x} \ = \ \sigma\trans(s,X_s^{t,x})D_x u(s,X_s^{t,x})1_{[t,T[}(s).
\]
Notice that identity \eqref{Identification2} holds taking $s=t$ in the first equation. Now, applying It\^o's formula to $u(s,X_s^{t,x})$ between $t$ and any $T_0\in[t,T[$, and using the fact that $u$ solves equation \eqref{KolmEq_Markov}, we see that \eqref{BSDE_Markov2} holds with $T_0$ in place of $T$. To conclude, it is enough to pass to the limit as $T_0\nearrow T$. This can be done using estimate \eqref{EstimateBSDE2} in Proposition \ref{P:EstimateBSDEAppendix} with $K\equiv0$. Finally, we notice that the present result is a slight generalization of Theorem 3.1 in \cite{pardoux_peng92} (see also Theorem 3.2 in \cite{peng91}), since $u\in C^{1,2}([0,T[\times\R^d)\cap C([0,T]\times\R^d)$ instead of $u\in C^{1,2}([0,T]\times\R^d)$.
\ep
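The Feynman--Kac identity \eqref{Identification2} can be illustrated numerically in the simplest setting. The sketch below (my own, not part of the paper) takes $d = 1$, $b \equiv 0$, $\sigma \equiv 1$, $f \equiv 0$, so that the BSDE solution reduces to $Y_t^{t,x} = \E[h(X_T^{t,x})]$, which a Monte Carlo average can compare against the classical solution of the heat equation.

```python
import math
import random

def feynman_kac_mc(h, t, x, T, n_paths, rng):
    """Monte Carlo estimate of u(t,x) = E[h(X_T^{t,x})] for dX = dW
    (b = 0, sigma = 1, f = 0), i.e. X_T^{t,x} = x + W_{T-t}.

    With f = 0 the BSDE gives Y_t^{t,x} = E[h(X_T^{t,x})]."""
    tau = math.sqrt(T - t)
    return sum(h(x + tau * rng.gauss(0.0, 1.0)) for _ in range(n_paths)) / n_paths

rng = random.Random(42)
# For h(x) = x^2 the terminal-value problem has the classical solution
# u(t,x) = x^2 + (T - t); the probabilistic representation should agree.
estimate = feynman_kac_mc(lambda y: y * y, 0.0, 1.0, 1.0, 200_000, rng)
assert abs(estimate - 2.0) < 0.05
```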
\vspace{3mm}
\noindent We can now present our first definition of strong-viscosity solution to equation \eqref{KolmEq_Markov}.
\begin{Definition}
\label{D:ViscosityFinite}
A function $u\colon[0,T]\times\R^d\rightarrow\R$ is called a \textbf{strong-viscosity solution} to equation \eqref{KolmEq_Markov} if there exists a sequence $(u_n,h_n,f_n,b_n,\sigma_n)_n$ of Borel measurable functions $u_n\colon[0,T]\times\R^d\rightarrow\R$, $h_n\colon\R^d\rightarrow\R$, $f_n\colon[0,T]\times\R^d\times\R\times\R^d\rightarrow\R$, $b_n\colon[0,T]\times\R^d\rightarrow\R^d$, and $\sigma_n\colon[0,T]\times\R^d\rightarrow\R^{d\times d}$, such that the following holds.
\begin{enumerate}
\item[\textup{(i)}] For some positive constants $C$ and $m$,
\begin{align*}
|b_n(t,x)-b_n(t,x')| + |\sigma_n(t,x)-\sigma_n(t,x')| \ &\leq \ C|x-x'|, \notag \\
|f_n(t,x,y,z)-f_n(t,x,y',z')| \ &\leq \ C\big(|y-y'| + |z-z'|\big), \notag \\
|b_n(t,0)| + |\sigma_n(t,0)| \ &\leq \ C, \notag \\
|u_n(t,x)| + |h_n(x)| + |f_n(t,x,0,0)| \ &\leq \ C\big(1 + |x|^m\big),
\end{align*}
for all $t\in[0,T]$, $x,x'\in\R^d$, $y,y'\in\R$, and $z,z'\in\R^d$. Moreover, the functions $u_n(t,\cdot)$, $h_n(\cdot)$, $f_n(t,\cdot,\cdot,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
\item[\textup{(ii)}] $u_n$ is a classical solution to
\begin{equation}
\label{KolmEq_Markov_n}
\begin{cases}
\partial_t u_n(t,x) + \langle b_n(t,x),D_x u_n(t,x)\rangle + \frac{1}{2}\textup{tr}(\sigma_n\sigma_n\trans(t,x)D_x^2 u_n(t,x)) & \\
+\, f_n(t,x,u_n(t,x),\sigma_n\trans(t,x)D_x u_n(t,x)) \ = \ 0, &\hspace{-1.8cm}\forall\,(t,x)\in[0,T)\times\R^d, \\
u_n(T,x) \ = \ h_n(x), &\hspace{-1.8cm}\forall\,x\in\R^d.
\end{cases}
\end{equation}
\item[\textup{(iii)}] $(u_n,h_n,f_n,b_n,\sigma_n)$ converges pointwise to $(u,h,f,b,\sigma)$ as $n\rightarrow\infty$.
\end{enumerate}
\end{Definition}
\begin{Remark}
\label{R:UnifConvergence}
{\rm
(i) Notice that, for all $t\in[0,T]$, asking equicontinuity on compact sets of $(u_n(t,\cdot))_n$ together with its pointwise convergence to $u(t,\cdot)$ is equivalent to requiring the uniform convergence on compact sets of $(u_n(t,\cdot))_n$ to $u(t,\cdot)$. The same remark applies to $(h_n(\cdot))_n$ and $(f_n(t,\cdot,\cdot,\cdot))_n$.
\noindent(ii) In Definition \ref{D:ViscosityFinite} we do not assume {\bf (A0)} for the functions $b,\sigma,f,h$. However, we can easily see that they automatically satisfy {\bf (A0)} as a consequence of point (i) of Definition \ref{D:ViscosityFinite}.
\noindent(iii) We observe that a strong-viscosity solution to equation \eqref{KolmEq_Markov} in the sense of Definition \ref{D:ViscosityFinite} is a standard viscosity solution; for a definition we refer, e.g., to \cite{crandishiilions92}. Indeed, since a strong-viscosity solution $u$ to \eqref{KolmEq_Markov} is the limit of classical solutions (so, in particular, viscosity solutions) to perturbed equations, it follows from stability results for viscosity solutions (see, e.g., Lemma 6.1 and Remark 6.3 in \cite{crandishiilions92}) that $u$ is a viscosity solution to equation \eqref{KolmEq_Markov}. On the other hand, if a strong-viscosity solution exists and a uniqueness result for viscosity solutions is in force, then every viscosity solution is a strong-viscosity solution, see also
Remark \ref{R:Ishii}.
\ep
}
\end{Remark}
\begin{Theorem}
\label{T:UniqSV_Markov}
Let Assumption {\bf (A0)} hold and let $u\colon[0,T]\times\R^d\rightarrow\R$ be a strong-viscosity solution to equation \eqref{KolmEq_Markov}. Then, the following Feynman-Kac formula holds
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
where $(Y_s^{t,x},Z_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d$, with $Y_s^{t,x}=u(s,X_s^{t,x})$, is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\begin{equation}
\label{BSDE_Markov}
Y_s^{t,x} \ = \ h(X_T^{t,x}) + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr - \int_s^T Z_r^{t,x} dW_r,
\end{equation}
for all $t \leq s \leq T$. In particular, there exists at most one strong-viscosity solution to equation \eqref{KolmEq_Markov}.
\end{Theorem}
Theorem \ref{T:UniqSV_Markov} will be proved in Section \ref{SubS:SecondDefnSV_Markov}, see Remark \ref{R:UniquenessSV_Markov}.
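When the driver vanishes, $f\equiv0$, the representation $u(t,x)=Y_t^{t,x}$ reduces to $u(t,x)=\E[h(X_T^{t,x})]$, which lends itself to a quick Monte Carlo sanity check. The sketch below is only an illustration outside the paper's framework: it assumes $b\equiv0$, $\sigma\equiv1$ and the hypothetical terminal condition $h(x)=x^2$, for which the classical solution of the backward heat equation is $u(t,x)=x^2+(T-t)$.

```python
import math
import random

def feynman_kac_mc(t, x, T, h, n_paths=20_000, n_steps=20, seed=1):
    """Monte Carlo estimate of u(t,x) = E[h(X_T^{t,x})] for the case
    f = 0 and dX_s = dW_s (i.e. b = 0, sigma = 1), via Euler paths."""
    rng = random.Random(seed)
    dt = (T - t) / n_steps
    sq = math.sqrt(dt)
    acc = 0.0
    for _ in range(n_paths):
        X = x
        for _ in range(n_steps):
            X += sq * rng.gauss(0.0, 1.0)
        acc += h(X)
    return acc / n_paths

# For h(x) = x^2, the backward heat equation u_t + u_xx/2 = 0 with
# u(T, x) = x^2 has the classical solution u(t, x) = x^2 + (T - t).
u_mc = feynman_kac_mc(0.0, 1.0, 1.0, lambda x: x * x)
print(abs(u_mc - 2.0) < 0.1)
```

With a general driver $f$ one would instead discretize the BSDE \eqref{BSDE_Markov} backward in time, which additionally requires approximating conditional expectations.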
\subsection{Second definition of strong-viscosity solution}
\label{SubS:SecondDefnSV_Markov}
Our second definition of strong-viscosity solution to equation \eqref{KolmEq_Markov} is more in the spirit of the standard definition of viscosity solution, which is usually required to be both a viscosity subsolution and a viscosity supersolution. Indeed, we introduce the concept of \emph{generalized strong-viscosity solution}, which has to be both a strong-viscosity subsolution and a strong-viscosity supersolution. As will be clear from the definition, this new notion of solution is more general (in other words, weaker) than the concept of strong-viscosity solution given earlier in Definition \ref{D:ViscosityFinite}. For this reason, we added the adjective \emph{generalized} to its name.
\vspace{3mm}
\noindent First, we introduce the standard notions of classical sub and supersolution.
\begin{Definition}
A function $u\colon[0,T]\times\R^d\rightarrow\R$ is called a \textbf{classical supersolution} $($resp. \textbf{classical subsolution}$)$ to equation \eqref{KolmEq_Markov} if $u\in C^{1,2}([0,T[\times\R^d)\cap C([0,T]\times\R^d)$ and solves
\[
\begin{cases}
\partial_t u(t,x) + \langle b(t,x),D_x u(t,x)\rangle + \frac{1}{2}\textup{tr}(\sigma\sigma\trans(t,x)D_x^2 u(t,x)) & \\
\hspace{1.1cm}+\, f(t,x,u(t,x),\sigma\trans(t,x)D_x u(t,x)) \ \geq \ (\text{resp. $\leq$}) \ 0, &\forall\,(t,x)\in[0,T)\times\R^d, \\
u(T,x) \ \geq \ (\text{resp. $\leq$}) \ h(x), &\forall\,x\in\R^d.
\end{cases}
\]
\end{Definition}
\noindent We state the following probabilistic representation result for classical sub and supersolutions.
\begin{Proposition}
\label{P:UniqStrictFinite}
Suppose that Assumption {\bf (A0)} holds.\\
\textup{(i)} Let $u\colon[0,T]\times\R^d\rightarrow\R$ be a classical supersolution to equation \eqref{KolmEq_Markov}, satisfying the polynomial growth condition
\[
|u(t,x)| \ \leq \ C'\big(1 + |x|^{m'}\big), \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some positive constants $C'$ and $m'$. Then, we have
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some uniquely determined $(Y_s^{t,x},Z_s^{t,x},K_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$, with $(Y_s^{t,x},$ $Z_s^{t,x})=(u(s,X_s^{t,x}),\sigma\trans(s,X_s^{t,x})D_x u(s,X_s^{t,x})1_{[t,T[}(s))$, solving the backward stochastic differential equation, $\P$-a.s.,
\[
Y_s^{t,x} \ = \ Y_T^{t,x} + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr + K_T^{t,x} - K_s^{t,x} - \int_s^T \langle Z_r^{t,x},dW_r\rangle, \quad t \leq s \leq T.
\]
\textup{(ii)} Let $u\colon[0,T]\times\R^d\rightarrow\R$ be a classical subsolution to equation \eqref{KolmEq_Markov}, satisfying the polynomial growth condition
\[
|u(t,x)| \ \leq \ C'\big(1 + |x|^{m'}\big), \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some positive constants $C'$ and $m'$. Then, we have
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some uniquely determined $(Y_s^{t,x},Z_s^{t,x},K_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$, with $(Y_s^{t,x},$ $Z_s^{t,x})=(u(s,X_s^{t,x}),\sigma\trans(s,X_s^{t,x})D_x u(s,X_s^{t,x})1_{[t,T[}(s))$, solving the backward stochastic differential equation, $\P$-a.s.,
\[
Y_s^{t,x} \ = \ Y_T^{t,x} + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr - (K_T^{t,x} - K_s^{t,x}) - \int_s^T \langle Z_r^{t,x},dW_r\rangle, \quad t \leq s \leq T.
\]
\end{Proposition}
\textbf{Proof.}
The proof can be carried out by proceeding as in the proof of Proposition \ref{P:UniqStrictFiniteStandard}.
\ep
\vspace{3mm}
\noindent We can now provide the definition of generalized strong-viscosity solution.
\begin{Definition}
\label{D:StrongSuperSub}
A function $u\colon[0,T]\times\R^d\rightarrow\R$ is called a \textbf{strong-viscosity supersolution} $($resp. \textbf{strong-viscosity subsolution}$)$ to equation \eqref{KolmEq_Markov} if there exists a sequence $(u_n,h_n,f_n,b_n,\sigma_n)_n$ of Borel measurable functions $u_n\colon[0,T]\times\R^d\rightarrow\R$, $h_n\colon\R^d\rightarrow\R$, $f_n\colon[0,T]\times\R^d\times\R\times\R^d\rightarrow\R$, $b_n\colon[0,T]\times\R^d\rightarrow\R^d$, and $\sigma_n\colon[0,T]\times\R^d\rightarrow\R^{d\times d}$, such that the following holds.
\begin{enumerate}
\item[\textup{(i)}] For some positive constants $C$ and $m$,
\begin{align*}
|b_n(t,x)-b_n(t,x')| + |\sigma_n(t,x)-\sigma_n(t,x')| \ &\leq \ C|x-x'|, \\
|f_n(t,x,y,z)-f_n(t,x,y',z')| \ &\leq \ C\big(|y-y'| + |z-z'|\big), \\
|b_n(t,0)| + |\sigma_n(t,0)| \ &\leq \ C, \\
|u_n(t,x)| + |h_n(x)| + |f_n(t,x,0,0)| \ &\leq \ C\big(1 + |x|^m\big),
\end{align*}
for all $t\in[0,T]$, $x,x'\in\R^d$, $y,y'\in\R$, and $z,z'\in\R^d$. Moreover, the functions $u_n(t,\cdot)$, $h_n(\cdot)$, $f_n(t,\cdot,\cdot,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
\item[\textup{(ii)}] $u_n$ is a classical supersolution $($resp. classical subsolution$)$ to
\[
\begin{cases}
\partial_t u_n(t,x) + \langle b_n(t,x),D_x u_n(t,x)\rangle + \frac{1}{2}\textup{tr}(\sigma_n\sigma_n\trans(t,x)D_x^2 u_n(t,x)) & \\
\hspace{2.8cm}+\, f_n(t,x,u_n(t,x),\sigma_n\trans(t,x)D_x u_n(t,x)) \ = \ 0, &\!\!\!\!\!\!\!\forall\,(t,x)\in[0,T)\times\R^d, \\
u_n(T,x) \ = \ h_n(x), &\!\!\!\!\!\!\!\forall\,x\in\R^d.
\end{cases}
\]
\item[\textup{(iii)}] $(u_n,h_n,f_n,b_n,\sigma_n)$ converges pointwise to $(u,h,f,b,\sigma)$ as $n\rightarrow\infty$.
\end{enumerate}
A function $u\colon[0,T]\times\R^d\rightarrow\R$ is called a \textbf{generalized strong-viscosity solution} to equation \eqref{KolmEq_Markov} if it is both a strong-viscosity supersolution and a strong-viscosity subsolution to \eqref{KolmEq_Markov}.
\end{Definition}
We can now state the following probabilistic representation result for strong-viscosity sub and supersolutions, which is one of the main results of this paper and from which the comparison principle will follow in Corollary \ref{C:CompThm}.
\begin{Theorem}
\label{T:RepresentationSuperSub}
\textup{(1)} Let $u\colon[0,T]\times\R^d\rightarrow\R$ be a strong-viscosity supersolution to equation \eqref{KolmEq_Markov}. Then, we have
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some uniquely determined $(Y_s^{t,x},Z_s^{t,x},K_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$, with $Y_s^{t,x}=u(s,X_s^{t,x})$, solving the backward stochastic differential equation, $\P$-a.s.,
\begin{equation}
\label{E:BSDE_StrongNonlinear_Super}
Y_s^{t,x} \ = \ Y_T^{t,x} + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr + K_T^{t,x} - K_s^{t,x} - \int_s^T \langle Z_r^{t,x},dW_r\rangle, \qquad t \leq s \leq T.
\end{equation}
\textup{(2)} Let $u\colon[0,T]\times\R^d\rightarrow\R$ be a strong-viscosity subsolution to equation \eqref{KolmEq_Markov}. Then, we have
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
for some uniquely determined $(Y_s^{t,x},Z_s^{t,x},K_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$, with $Y_s^{t,x}=u(s,X_s^{t,x})$, solving the backward stochastic differential equation, $\P$-a.s.,
\begin{equation}
\label{E:BSDE_StrongNonlinear_Sub}
Y_s^{t,x} \ = \ Y_T^{t,x} + \int_s^T f(r,X_r^{t,x},Y_r^{t,x},Z_r^{t,x}) dr - \big(K_T^{t,x} - K_s^{t,x}\big) - \int_s^T \langle Z_r^{t,x},dW_r\rangle, \qquad t \leq s \leq T.
\end{equation}
\end{Theorem}
\textbf{Proof.}
We shall only prove statement (1), since (2) can be established similarly. To prove (1), consider a sequence $(u_n,h_n,f_n,b_n,\sigma_n)_n$ satisfying conditions (i)-(iii) of Definition \ref{D:StrongSuperSub}. For every $n\in\N$ and any $(t,x)\in[0,T]\times\R^d$, consider the stochastic equation, $\P$-a.s.,
\[
X_s \ = \ x + \int_t^s b_n(r,X_r) dr + \int_t^s \sigma_n(r,X_r) dW_r, \qquad t \leq s \leq T.
\]
It is well-known that there exists a unique solution $(X_s^{n,t,x})_{s\in[t,T]}$ to the above equation. Moreover, from Proposition \ref{P:UniqStrictFinite} we know that $u_n(t,x) = Y_t^{n,t,x}$, $(t,x)\in[0,T]\times\R^d$, for some $(Y_s^{n,t,x},Z_s^{n,t,x},$ $K_s^{n,t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$ solving the backward stochastic differential equation, $\P$-a.s.,
\[
Y_s^{n,t,x} \ = \ Y_T^{n,t,x} + \int_s^T f_n(r,X_r^{n,t,x},Y_r^{n,t,x},Z_r^{n,t,x}) dr + K_T^{n,t,x} - K_s^{n,t,x} - \int_s^T \langle Z_r^{n,t,x}, dW_r\rangle, \quad t \leq s \leq T.
\]
Notice that, from the uniform polynomial growth condition of $(u_n)_n$ and estimate \eqref{EstimateSupXn} in Lemma \ref{L:AppendixX} (for the particular case when $b_n$ and $\sigma_n$ only depend on the current value of the path, rather than on its entire past trajectory) we have, for any $p\geq1$,
\[
\sup_{n\in\N} \|Y^{n,t,x}\|_{\S^p(t,T)} \ < \ \infty.
\]
Then, it follows from Proposition \ref{P:EstimateBSDEAppendix}, the polynomial growth condition of $(f_n)_n$ in $x$, and the linear growth condition of $(f_n)_n$ in $(y,z)$, that
\[
\sup_n\big(\|Z^{n,t,x}\|_{\H^2(t,T)^d} + \|K^{n,t,x}\|_{\S^2(t,T)}\big) \ < \ \infty.
\]
Set $Y_s^{t,x}=u(s,X_s^{t,x})$, for any $s\in[t,T]$. Then, from the polynomial growth condition that $u$ inherits from the sequence $(u_n)_n$, and using estimate \eqref{EstimateSupXn} in Lemma \ref{L:AppendixX} (for the particular case of non-path-dependent $b_n$ and $\sigma_n$), we deduce that $\|Y^{t,x}\|_{\S^p(t,T)}<\infty$, for any $p\geq1$. In particular, $Y^{t,x}\in\S^2(t,T)$ and it is a continuous process. We also have, using
the convergence result \eqref{limXn-X} in Lemma \ref{L:AppendixX} (for the particular case of non-path-dependent $b_n$ and $\sigma_n$), that there exists a subsequence of $(X^{n,t,x})_n$, which we still denote $(X^{n,t,x})_n$, such that
\begin{equation}
\label{E:supX^n-X-->0}
\sup_{t\leq s\leq T}|X_s^{n,t,x}(\omega)-X_s^{t,x}(\omega)| \ \overset{n\rightarrow\infty}{\longrightarrow} \ 0, \qquad \forall\,\omega\in\Omega\backslash N,
\end{equation}
for some null measurable set $N\subset\Omega$. Moreover, from estimate \eqref{EstimateSupXn} in Lemma \ref{L:AppendixX} (for the particular case of non-path-dependent $b_n$ and $\sigma_n$) it follows that, possibly enlarging $N$, $\sup_{t\leq s\leq T}(|X_s^{t,x}(\omega)|$ $+$ $|X_s^{n,t,x}(\omega)|)<\infty$, for any $n\in\N$ and any $\omega\in\Omega\backslash N$. Now, fix
$\omega\in\Omega\backslash N$; then
\begin{align*}
|Y_s^{n,t,x}(\omega)-Y_s^{t,x}(\omega)| \ &= \ |u_n(s,X_s^{n,t,x}(\omega))-u(s,X_s^{t,x}(\omega))| \\
&\leq \ |u_n(s,X_s^{n,t,x}(\omega))-u_n(s,X_s^{t,x}(\omega))| + |u_n(s,X_s^{t,x}(\omega))-u(s,X_s^{t,x}(\omega))|.
\end{align*}
For any $\eps>0$, from point (iii) of Definition \ref{D:StrongSuperSub} it follows that there exists $n'\in\N$ such that
\[
|u_n(s,X_s^{t,x}(\omega))-u(s,X_s^{t,x}(\omega))| \ < \ \frac{\eps}{2}, \qquad \forall\,n\geq n'.
\]
On the other hand, from the equicontinuity on compact sets of $(u_n)_n$, we see that there exists $\delta>0$, independent of $n$, such that
\[
|u_n(s,X_s^{n,t,x}(\omega))-u_n(s,X_s^{t,x}(\omega))| \ < \ \frac{\eps}{2}, \qquad \text{if }|X_s^{n,t,x}(\omega)-X_s^{t,x}(\omega)|<\delta.
\]
Using \eqref{E:supX^n-X-->0}, we can find $n''\in\N$, $n''\geq n'$, such that
\[
\sup_{t\leq s\leq T}|X_s^{n,t,x}(\omega)-X_s^{t,x}(\omega)| \ < \ \delta, \qquad \forall\,n\geq n''.
\]
In conclusion, for any $\omega\in\Omega\backslash N$ and any $\eps>0$ there exists $n''\in\N$ such that
\[
|Y_s^{n,t,x}(\omega)-Y_s^{t,x}(\omega)| \ < \ \eps, \qquad \forall\,n\geq n''.
\]
Therefore, $Y_s^{n,t,x}(\omega)$ converges to $Y_s^{t,x}(\omega)$, as $n$ tends to infinity, for any $(s,\omega)\in[t,T]\times(\Omega\backslash N)$. In a similar way, we can prove that there exists a null measurable set $N'\subset\Omega$ such that $f_n(s,X_s^{n,t,x}(\omega),y,$ $z)\rightarrow f(s,X_s^{t,x}(\omega),y,z)$, for any $(s,\omega,y,z)\in[t,T]\times(\Omega\backslash N')\times\R\times\R^d$. As a consequence, the claim follows from Theorem \ref{T:LimitThmBSDE}.
\ep
\vspace{3mm}
We can finally state a comparison principle for strong-viscosity sub and supersolutions, which follows directly from the comparison theorem for BSDEs.
\begin{Corollary}[Comparison principle]
\label{C:CompThm}
Let $\check u\colon[0,T]\times\R^d\rightarrow\R$ $($resp. $\hat u\colon[0,T]\times\R^d\rightarrow\R$$)$ be a strong-viscosity subsolution $($resp. strong-viscosity supersolution$)$ to equation \eqref{KolmEq_Markov}. Then $\check u \leq \hat u$ on $[0,T]\times\R^d$. In particular, there exists at most one generalized strong-viscosity solution to equation \eqref{KolmEq_Markov}.
\end{Corollary}
\begin{Remark}
\label{R:UniquenessSV_Markov}
{\rm
Notice that Theorem \ref{T:UniqSV_Markov} follows from Corollary \ref{C:CompThm}, since a strong-viscosity solution (Definition \ref{D:ViscosityFinite}) is in particular a generalized strong-viscosity solution.
\ep
}
\end{Remark}
\textbf{Proof.}
We know that $\check u(T,x) \leq h(x) \leq \hat u(T,x)$, for all $x\in\R^d$. Moreover, from Theorem \ref{T:RepresentationSuperSub} we have
\[
\check u(t,x) \ = \ \check Y_t^{t,x}, \qquad \hat u(t,x) \ = \ \hat Y_t^{t,x}, \qquad \text{for all }(t,x)\in[0,T]\times\R^d,
\]
for some $(\check Y_s^{t,x},\check Z_s^{t,x},\check K_s^{t,x})_{s\in[t,T]},(\hat Y_s^{t,x},\hat Z_s^{t,x},\hat K_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$ satisfying \eqref{E:BSDE_StrongNonlinear_Sub} and \eqref{E:BSDE_StrongNonlinear_Super}, respectively. Then, the result follows from a direct application of the comparison theorem for backward stochastic differential equations, see, e.g., Theorem 1.3 in \cite{peng00}.
\ep
\vspace{3mm}
Now, we present two existence results for strong-viscosity solutions to equation \eqref{KolmEq_Markov}.
\begin{Theorem}
\label{T:ExistSV_Markov}
Let Assumption {\bf (A0)} hold and suppose that $b=b(x)$ and $\sigma=\sigma(x)$ do not depend on $t$. Suppose also that the functions $f$ and $h$ are continuous. Then, the function $u$ given by
\begin{equation}
\label{Identification}
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\end{equation}
where $(Y_s^{t,x},Z_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d$ is the unique solution to \eqref{BSDE_Markov}, is a strong-viscosity solution to equation \eqref{KolmEq_Markov}.
\end{Theorem}
\begin{Remark}
\label{R:Ishii}
{\rm
Under the assumptions of Theorem \ref{T:ExistSV_Markov} it follows from Theorem 7.4 in \cite{ishii89} that a uniqueness result for standard viscosity solutions to equation \eqref{KolmEq_Markov} holds. Moreover, since the seminal paper \cite{pardoux_peng92}, we know that the unique viscosity solution is given by formula \eqref{Identification}, therefore it coincides with the strong-viscosity solution.
\ep
}
\end{Remark}
\textbf{Proof (of Theorem \ref{T:ExistSV_Markov}).}
Let us fix some notation. Let $q\in\N\backslash\{0\}$ and consider the function $\phi_q\in C^\infty(\R^q)$ given by
\[
\phi_q(w) \ = \ c \exp\bigg(\frac{1}{|w|^2 - 1}\bigg) 1_{\{|w|<1\}}, \qquad \forall\,w\in\R^q,
\]
with $c>0$ such that $\int_{\R^q} \phi_q(w) dw = 1$. Then, we define $\phi_{q,n}(w) = n^q\phi_q(nw)$, $\forall\,w\in\R^q$, $n\in\N$. Let us now define, for any $n\in\N$,
\begin{align*}
b_n(x) \ &= \ \int_{\R^d} \phi_{d,n}(x') b(x - x') dx', \qquad \sigma_n(x) \ = \ \int_{\R^d} \phi_{d,n}(x') \sigma(x - x') dx', \\
f_n(t,x,y,z) \ &= \ \int_{\R^d\times\R\times\R^d} \phi_{2d+1,n}(x',y',z') f(t,x-x',y-y',z-z') dx'dy'dz', \\
h_n(x) \ &= \ \int_{\R^d} \phi_{d,n}(x') h(x - x') dx',
\end{align*}
for all $(t,x,y,z)\in[0,T]\times\R^d\times\R\times\R^d$. Then, we see that the sequence of continuous functions $(b_n,\sigma_n,f_n,h_n)_n$ satisfies assumptions (i) and (iii) of Definition \ref{D:ViscosityFinite}. Moreover, for any $n\in\N$
we have the following.
\begin{itemize}
\item $b_n$ and $\sigma_n$ are of class $C^3$ with bounded partial derivatives from order $1$ up to order $3$.
\item For all $t\in[0,T]$, $f_n(t,\cdot,\cdot,\cdot)\in C^3(\R^d\times\R\times\R^d)$ and the two properties below hold.
\begin{itemize}
\item $f_n(t,\cdot,0,0)$ belongs to $C^3$ and its third-order partial derivatives satisfy a polynomial growth condition uniformly in $t$.
\item $D_y f_n$, $D_z f_n$ are bounded on $[0,T]\times\R^d\times\R\times\R^d$, as well as their first- and second-order derivatives with respect to $x,y,z$.
\end{itemize}
\item $h_n\in C^3(\R^d)$ and its third-order partial derivatives satisfy a polynomial growth condition.
\end{itemize}
Therefore, it follows from Theorem 3.2 in \cite{pardoux_peng92} that a classical solution to equation \eqref{KolmEq_Markov_n} is given by
\begin{equation}
\label{Identification_un}
u_n(t,x) \ = \ Y_t^{n,t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\end{equation}
where $(Y_s^{n,t,x},Z_s^{n,t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d$ is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\[
Y_s^{n,t,x} \ = \ h_n(X_T^{n,t,x}) + \int_s^T f_n(r,X_r^{n,t,x},Y_r^{n,t,x},Z_r^{n,t,x}) dr - \int_s^T Z_r^{n,t,x} dW_r, \qquad t \leq s \leq T,
\]
with
\[
X_s^{n,t,x} \ = \ x + \int_t^s b_n(X_r^{n,t,x}) dr + \int_t^s \sigma_n(X_r^{n,t,x}) dW_r, \qquad t \leq s \leq T.
\]
From \eqref{Identification_un}, Proposition \ref{P:EstimateBSDEAppendix}, and estimate \eqref{EstimateSupXn}, we see that $u_n$ satisfies a polynomial growth condition uniform in $n$. It remains to prove that the sequence $(u_n)_n$ converges pointwise to $u$ as $n\rightarrow\infty$, and that the functions $u_n(t,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$. Concerning this latter property, fix $t\in[0,T]$, a compact subset $K\subset\R^d$, and $\eps>0$. We have to prove that there exists $\delta=\delta(\eps,K)$ such that
\begin{equation}
\label{UniformContinuity_un}
|u_n(t,x) - u_n(t,x')| \ \leq \ \eps, \qquad \text{if }|x-x'|\leq\delta,\,x,x'\in K.
\end{equation}
To this end, we begin by noting that, by estimate \eqref{EstimateBSDE2}, there exists a constant $C$, independent of $n$, such that
\begin{align*}
|u_n(t,x) - u_n(t,x')|^2 \ &\leq \ C\,\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2\big] \\
&\quad \ + C\int_t^T \E\big[\big|f_n(s,X_s^{n,t,x},Y_s^{n,t,x},Z_s^{n,t,x}) - f_n(s,X_s^{n,t,x'},Y_s^{n,t,x},Z_s^{n,t,x})\big|^2\big] ds, \notag
\end{align*}
for all $t\in[0,T]$ and $x,x'\in\R^d$. In order to prove \eqref{UniformContinuity_un}, we also recall the following standard estimate: for any $p\geq2$ there exists a positive constant $C_p$, independent of $n$, such that
\[
\E\big[\big|X_s^{n,t,x} - X_s^{n,t,x'}\big|^p\big] \ \leq \ C_p |x - x'|^p,
\]
for all $t\in[0,T]$, $s\in[t,T]$, $x,x'\in\R^d$, $n\in\N$. Now, choose $p>d$, $R>0$, and $\alpha\in]0,p-d[$. Then, it follows from the Garsia-Rodemich-Rumsey lemma (see, in particular, formula (3a.2) in \cite{barlow_yor82}) that, for all $t\in[0,T]$, $s\in[t,T]$, $x,x'\in\R^d$, $n\in\N$,
\begin{equation}
\label{UniformContinuityX}
|X_s^{n,t,x} - X_s^{n,t,x'}| \ \leq \ (\Gamma_s^{n,t})^{1/p} |x - x'|^{\alpha/p},
\end{equation}
for some process $\Gamma^{n,t}=(\Gamma_s^{n,t})_{s\in[t,T]}$ given by
\[
\Gamma_s^{n,t} \ = \ C_d\,8^p\,2^\alpha\bigg(1 + \frac{2d}{\alpha}\bigg)\int_{\{(y,y')\in\R^{2d}\colon|y|,|y'|\leq R\}} \frac{|X_s^{n,t,y} - X_s^{n,t,y'}|^p}{|y - y'|^{\alpha + 2d}} dydy'
\]
and
\begin{equation}
\label{ExpectedValueGamma}
\E\big[\Gamma_s^{n,t}\big] \ \leq \ C_d\,C_p \frac{1}{(p - d) - \alpha} R^{p - \alpha},
\end{equation}
where $C_d$ is a universal constant depending only on $d$.
Now, let us prove that
\begin{equation}
\label{UniformContinuity_hn}
\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2\big] \ \leq \ \eps, \qquad \text{if }|x-x'|\leq\delta,\,x,x'\in K.
\end{equation}
Let $x,x'\in K$ and let $m$ be a strictly positive integer to be chosen later. Then, consider the event (we omit the dependence on $t,x$)
\[
\Omega_{n,m} \ = \ \big\{\omega\in\Omega\colon\Gamma_T^{n,t}(\omega)\leq m,\,|X_T^{n,t,x}(\omega)|\leq m\big\}.
\]
From \eqref{UniformContinuityX} we see that, on $\Omega_{n,m}$, $X_T^{n,t,x'}$ is also uniformly bounded by a constant independent of $n,t,x,x'$, since $x,x'\in K$. In particular, from the equicontinuity on compact sets of the sequence $(h_n)_n$, it follows that there exists a modulus of continuity $\rho$ (depending on $K$, but independent of $n$) such that
\begin{align*}
\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2\big] \ &\leq \ \E\big[\rho^2(|X_T^{n,t,x} - X_T^{n,t,x'}|) 1_{\Omega_{n,m}}\big] \\
&\quad \ + \E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2 1_{\Omega_{n,m}^c}\big].
\end{align*}
By \eqref{UniformContinuityX} and the Cauchy-Schwarz inequality,
\begin{align*}
\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2\big] \ &\leq \ \rho^2\big(m^{1/p}|x - x'|^{\alpha/p}\big) \\
&+ \sqrt{\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^4\big]} \sqrt{\P(\Gamma_T^{n,t}>m) + \P(|X_T^{n,t,x}|>m)}.
\end{align*}
From the standard inequalities $|a-b|^4\leq8(a^4+b^4)$, $\forall\,a,b\in\R$, and $\sqrt{c+d}\leq\sqrt{c}+\sqrt{d}$, $\forall\,c,d\geq0$, we see that
\[
\sqrt{\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^4\big]} \ \leq \ \sqrt{8\E\big[\big|h_n(X_T^{n,t,x})\big|^4\big]} + \sqrt{8\E\big[\big|h_n(X_T^{n,t,x'})\big|^4\big]}.
\]
Now, using this estimate, the polynomial growth condition of $h_n$ (uniform in $n$), estimate \eqref{EstimateSupXn}, estimate \eqref{ExpectedValueGamma}, and Chebyshev's inequality, we obtain
\begin{align*}
\sqrt{\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^4\big]} \ &\leq \ C_K, \\
\P(\Gamma_T^{n,t}>m) \ &\leq \ \frac{\E\big[\Gamma_T^{n,t}\big]}{m} \ \leq \ \frac{C_K}{m}, \\
\P(|X_T^{n,t,x}|>m) \ &\leq \ \frac{\E\big[|X_T^{n,t,x}|\big]}{m} \ \leq \ \frac{C_K}{m},
\end{align*}
for some positive constant $C_K$, possibly depending on $K$ (in particular, on $x$ and $x'$), but independent of $n,t$. Therefore, we see that we can find $m=m(\eps,K)$ large enough such that
\[
\E\big[\big|h_n(X_T^{n,t,x}) - h_n(X_T^{n,t,x'})\big|^2\big] \ \leq \ \rho^2\big(m^{1/p}|x - x'|^{\alpha/p}\big) + \frac{\eps}{2}.
\]
Then, there exists $\delta=\delta(\eps,K)>0$ such that \eqref{UniformContinuity_hn} holds. In a similar way we can prove that, possibly taking a smaller $\delta=\delta(\eps,K)>0$, we have
\begin{equation}
\label{UniformContinuity_fn}
\E\big[\big|f_n(s,X_s^{n,t,x},Y_s^{n,t,x},Z_s^{n,t,x}) - f_n(s,X_s^{n,t,x'},Y_s^{n,t,x},Z_s^{n,t,x})\big|^2\big] \ \leq \ \eps,
\end{equation}
if $|x-x'|\leq\delta$, $x,x'\in K$, $\forall\,s\in[t,T]$. By \eqref{UniformContinuity_hn} and \eqref{UniformContinuity_fn} we deduce the validity of \eqref{UniformContinuity_un}.
Finally, let us prove the pointwise convergence of the sequence $(u_n)_n$ to $u$. Using again estimate \eqref{EstimateBSDE2}, we find
\begin{align}
|u_n(t,x) - u(t,x)|^2 \ &\leq \ C\,\E\big[\big|h_n(X_T^{n,t,x}) - h(X_T^{t,x})\big|^2\big] \label{unx-ux} \\
&\quad \ + C\int_t^T \E\big[\big|f_n(s,X_s^{n,t,x},Y_s^{t,x},Z_s^{t,x}) - f(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x})\big|^2\big] ds, \notag
\end{align}
$\forall\,(t,x)\in[0,T]\times\R^d$, $n\in\N$, for some constant $C$, independent of $n$ and depending only on the (uniform in $n$) Lipschitz constant of $f_n$ with respect to $(y,z)$. By the uniform convergence on compact sets of $(h_n(\cdot),f_n(t,\cdot,y,z))_n$ to $(h(\cdot),f(t,\cdot,y,z))$, we have, $\P$-a.s.,
\begin{align}
h_n(X_T^{n,t,x}) \ &\overset{n\rightarrow\infty}{\longrightarrow} \ h(X_T^{t,x}), \label{Conv_h_n} \\
f_n(s,X_s^{n,t,x},Y_s^{t,x},Z_s^{t,x}) \ &\overset{n\rightarrow\infty}{\longrightarrow} \ f(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x}), \label{Conv_f_n}
\end{align}
for all $s\in[t,T]$. By Assumption \textbf{(A0)} and the polynomial growth condition of $h_n$, $f_n$, $u_n$ (uniform in $n$), estimates \eqref{EstimateSupX} and \eqref{EstimateSupXn}, Proposition \ref{P:EstimateBSDEAppendix}, we can prove the uniform integrability of the sequences $(|h_n(X_T^{n,t,x}) - h(X_T^{t,x})|^2)_n$ and $(|f_n(s,X_s^{n,t,x},Y_s^{t,x},Z_s^{t,x}) - f(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x})|^2)_n$, $\forall\,s\in[t,T]$. This, together with \eqref{Conv_h_n}-\eqref{Conv_f_n}, implies that
\begin{align*}
\E\big[\big|h_n(X_T^{n,t,x}) - h(X_T^{t,x})\big|^2\big] \ &\overset{n\rightarrow\infty}{\longrightarrow} \ 0, \\
\E\big[\big|f_n(s,X_s^{n,t,x},Y_s^{t,x},Z_s^{t,x}) - f(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x})\big|^2\big] \ &\overset{n\rightarrow\infty}{\longrightarrow} \ 0,
\end{align*}
for all $s\in[t,T]$. From the second convergence, the polynomial growth condition of $f$ and $f_n$ (uniform in $n$), estimates \eqref{EstimateSupX} and \eqref{EstimateSupXn}, it follows that
\[
\lim_{n\rightarrow\infty} \int_t^T \E\big[\big|f_n(s,X_s^{n,t,x},Y_s^{t,x},Z_s^{t,x}) - f(s,X_s^{t,x},Y_s^{t,x},Z_s^{t,x})\big|^2\big] ds \ = \ 0.
\]
In conclusion, we can pass to the limit in \eqref{unx-ux} as $n\rightarrow\infty$, and we obtain the pointwise convergence of $(u_n)_n$ to $u$.
\ep
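As an aside, the mollifications $b_n,\sigma_n,f_n,h_n$ used at the beginning of the proof are straightforward to reproduce numerically. The sketch below is a hypothetical one-dimensional illustration (it smooths $h(x)=|x|$, which is Lipschitz but not differentiable): the normalizing constant $c$ and the convolution $h_n=\phi_{1,n}*h$ are approximated by Riemann sums.

```python
import math

def phi1(w):
    """Unnormalized 1D mollifier, supported on (-1, 1)."""
    return math.exp(1.0 / (w * w - 1.0)) if abs(w) < 1.0 else 0.0

# Normalizing constant c such that phi_1 = c * phi1 integrates to one,
# approximated by a Riemann sum (the endpoint values vanish).
M = 20_000
c = 1.0 / (sum(phi1(-1.0 + 2.0 * k / M) for k in range(M + 1)) * (2.0 / M))

def h(x):                      # hypothetical terminal condition: a kink at 0
    return abs(x)

def h_n(x, n, K=2_000):
    """h_n(x) = int phi_{1,n}(x') h(x - x') dx', phi_{1,n}(w) = n c phi1(n w);
    the integral runs over the support [-1/n, 1/n] of phi_{1,n}."""
    step = (2.0 / n) / K
    acc = 0.0
    for k in range(K + 1):
        xp = -1.0 / n + k * step
        acc += n * c * phi1(n * xp) * h(x - xp)
    return acc * step

# Away from the kink, h_n reproduces h; at the kink the error decays in n.
print(h_n(5.0, 10), h_n(0.0, 10) < h_n(0.0, 5))
```

Since $h_n(0,n)=\tfrac{1}{n}\int c\,\phi_1(u)|u|\,du$ here, the error at the kink decays like $1/n$, consistently with the uniform convergence on compact sets required by point (iii) of Definition \ref{D:ViscosityFinite}.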
\begin{Remark}
\label{R:TwoDefinitions}
{\rm
Notice that Theorem \ref{T:ExistSV_Markov} gives an existence result for strong-viscosity solutions (see Definition \ref{D:ViscosityFinite}) to equation \eqref{KolmEq_Markov}, which implies an existence result for generalized strong-viscosity solutions (see Definition \ref{D:StrongSuperSub}). In Section \ref{S:SVPath} we will consider only Definition \ref{D:ViscosityFinite} and extend it to the path-dependent case.
\ep
}
\end{Remark}
We conclude this section by providing another existence result for strong-viscosity solutions to equation \eqref{KolmEq_Markov}, under a set of assumptions different from those of Theorem \ref{T:ExistSV_Markov}. In particular, $f=f(t,x)$ does not depend on $(y,z)$, while $b$ and $\sigma$ can depend on $t$.
\begin{Theorem}
\label{T:ExistSV_Markov2}
Let Assumption {\bf (A0)} hold and suppose that $f=f(t,x)$ does not depend on $(y,z)$. Suppose also that the functions $f$ and $h$ are continuous. Then, the function $u$ given by
\[
u(t,x) \ = \ Y_t^{t,x}, \qquad \forall\,(t,x)\in[0,T]\times\R^d,
\]
where $(Y_s^{t,x},Z_s^{t,x})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)^d$ is the unique solution to \eqref{BSDE_Markov}, is a strong-viscosity solution to equation \eqref{KolmEq_Markov}.
\end{Theorem}
\textbf{Proof.}
The proof can be done proceeding as in the proof of Theorem \ref{T:ExistSV_Markov}, smoothing the coefficients, but using Theorem 6.1, Chapter 5, in \cite{friedman75vol1} instead of Theorem 3.2 in \cite{pardoux_peng92}.
\ep
\section{Strong-viscosity solutions in the path-dependent case}
\label{S:SVPath}
One of the goals of the present section is to show that, compared with the standard notion of viscosity solution, the notion of strong-viscosity solution is very flexible and easy to extend to settings more general than the Markovian one. In particular, we focus on semilinear parabolic path-dependent PDEs.
\subsection{Semilinear parabolic path-dependent PDEs}
\label{Ss:PPDE}
Let us denote by $C([-T,0])$ the Banach space of all continuous paths $\eta\colon[-T,0]\rightarrow\R$ endowed with the supremum norm $\|\eta\|=\sup_{t\in[-T,0]}|\eta(t)|$. Let us consider the following semilinear parabolic path-dependent PDE (for simplicity of notation, we consider the unidimensional case, with $\eta$ taking values in $\R$):
\begin{equation}
\label{KolmEq}
\begin{cases}
\partial_t \Uc + D^H \Uc + b(t,\eta)D^V \Uc + \frac{1}{2}\sigma(t,\eta)^2D^{VV} \Uc \\
\hspace{2.3cm}+\, F(t,\eta,\Uc,\sigma(t,\eta)D^V \Uc) \ = \ 0, \;\;\; &\quad\forall\,(t,\eta)\in[0,T[\times C([-T,0]), \\
\Uc(T,\eta) \ = \ {H}(\eta), &\quad\forall\,\eta\in C([-T,0]),
\end{cases}
\end{equation}
where $D^H\Uc$, $D^V\Uc$, $D^{VV}\Uc$ are the functional derivatives introduced in \cite{cosso_russo15a}, whose definition is recalled below. Concerning the coefficients $b\colon[0,T]\times C([-T,0])\rightarrow\R$, $\sigma\colon[0,T]\times C([-T,0])\rightarrow\R$, $F\colon[0,T]\times C([-T,0])\times\R\times\R\rightarrow\R$, and ${H}\colon C([-T,0])\rightarrow\R$ of equation \eqref{KolmEq}, we shall impose the following assumptions.
\vspace{3mm}
\noindent\textbf{(A1)} \hspace{3mm} $b$, $\sigma$, $F$, ${H}$ are Borel measurable functions satisfying, for some positive constants $C$ and $m$,
\begin{align*}
|b(t,\eta)-b(t,\eta')| + |\sigma(t,\eta)-\sigma(t,\eta')| \ &\leq \ C\|\eta-\eta'\|, \\
|F(t,\eta,y,z)-F(t,\eta,y',z')| \ &\leq \ C\big(|y-y'| + |z-z'|\big), \\
|b(t,0)| + |\sigma(t,0)| \ &\leq \ C, \\
|F(t,\eta,0,0)| + |{H}(\eta)| \ &\leq \ C\big(1 + \|\eta\|^m\big),
\end{align*}
for all $t\in[0,T]$, $\eta,\eta'\in C([-T,0])$, $y,y',z,z'\in\R$.
\subsection{Recall on functional It\^o calculus}
In the present subsection we recall the results of functional It\^o calculus needed later, without pausing on the technicalities and focusing on the intuition. For all technical details and rigorous definitions, we refer to \cite{cosso_russo15a}.
\vspace{1mm}
We begin by introducing the \emph{functional derivatives}. To this end, it is useful to think of $\Uc=\Uc(t,\eta)$ as $\Uc=\Uc(t,\eta(\cdot)1_{[-T,0[}+\eta(0)1_{\{0\}})$, in order to emphasize the past $\eta(\cdot)1_{[-T,0[}$ and present $\eta(0)$ of the path $\eta$. Then, we can give, at least formally, the following definitions (see Definition 2.23 in \cite{cosso_russo15a}):
\vspace{1mm}
\noindent$\bullet$ \emph{Horizontal derivative.} We look at the sensitivity of $\Uc$ with respect to a constant extension of the past $\eta(\cdot)1_{[-T,0[}$, keeping fixed the present value at $\eta(0)$:
\[
D^H \Uc(t,\eta) \ := \ \lim_{\eps\rightarrow0^+} \frac{\Uc(t,\eta(\cdot)1_{[-T,0[}+\eta(0)1_{\{0\}}) - \Uc(t,\eta(\cdot-\eps)1_{[-T,0[}+\eta(0)1_{\{0\}})}{\eps}.
\]
$\bullet$ \emph{First vertical derivative.} We look at the first variation with respect to the present, with the past fixed:
\[
D^V \Uc(t,\eta) \ := \ \lim_{\eps\rightarrow0} \frac{\Uc(t,\eta(\cdot)1_{[-T,0[}+(\eta(0)+\eps)1_{\{0\}}) - \Uc(t,\eta(\cdot)1_{[-T,0[}+\eta(0)1_{\{0\}})}{\eps}.
\]
$\bullet$ \emph{Second vertical derivative.} We look at the second variation with respect to the present, with the past fixed:
\[
D^{VV} \Uc(t,\eta) \ := \ \lim_{\eps\rightarrow0} \frac{D^V \Uc(t,\eta(\cdot)1_{[-T,0[}+(\eta(0)+\eps)1_{\{0\}}) - D^V \Uc(t,\eta(\cdot)1_{[-T,0[}+\eta(0)1_{\{0\}})}{\eps}.
\]
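For intuition, the following elementary non-path-dependent example (a sanity check added here, not taken from \cite{cosso_russo15a}) shows how the three derivatives behave when $\Uc$ depends on the path only through its present value:

```latex
% Minimal sanity check: Uc(t,eta) = f(eta(0)) with f in C^2(R).
% A horizontal shift of the past leaves eta(0) untouched, so the
% horizontal derivative vanishes, while the vertical derivatives
% reduce to ordinary derivatives of f:
\[
D^H \Uc(t,\eta) \ = \ 0, \qquad
D^V \Uc(t,\eta) \ = \ f'(\eta(0)), \qquad
D^{VV} \Uc(t,\eta) \ = \ f''(\eta(0)).
\]
```

Plugging these expressions into the functional It\^o formula of Theorem \ref{T:ItoTime} recovers the classical It\^o formula, consistently with point (ii) of the remark following that theorem.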
\vspace{1mm}
Given $I=[0,T[$ or $I=[0,T]$, we say that $\Uc\colon I\times C([-T,0])\rightarrow\R$ is of class $C^{1,2}((I\times\textup{past})\times\textup{present})$ if, roughly speaking, $\partial_t \Uc$, $D^H \Uc$, $D^V \Uc$, and $D^{VV}\Uc$ exist and are continuous together with $\Uc$; for a rigorous definition we refer to \cite{cosso_russo15a}, Definition 2.28.
\vspace{1mm}
We can finally state the \emph{functional It\^o formula}. Firstly, we fix some notation. As in Section \ref{S:SV_Markov}, we consider a complete probability space $(\Omega,\Fc,\P)$. Given a real-valued continuous process $X = (X_t)_{t\in[0,T]}$ on $(\Omega,\Fc,\P)$, we extend it to all $t\in\R$ in a canonical way as follows: $X_t:=X_0$, $t<0$, and $X_t:=X_T$, $t>T$; then, we associate to $X$ the so-called \emph{window process} $\X=(\X_t)_{t\in\R}$, which is a $C([-T,0])$-valued process given by
\[
\X_t := \{X_{t+s},\,s\in[-T,0]\}, \qquad t\in\R.
\]
\begin{Theorem}
\label{T:ItoTime}
Let $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ be of class $C^{1,2}(([0,T]\times\textup{past})\times\textup{present})$ and $X=(X_t)_{t\in[0,T]}$ be a real continuous finite quadratic variation process. Then, the following \textbf{functional It\^o formula} holds, $\P$-a.s.,
\begin{align*}
\Uc(t,\X_t) \ &= \ \Uc(0,\X_0) + \int_0^t \big(\partial_t \Uc(s,\X_s) + D^H \Uc(s,\X_s)\big)ds + \int_0^t D^V \Uc(s,\X_s) d^- X_s \notag \\
&\quad \ + \frac{1}{2}\int_0^t D^{VV}\Uc(s,\X_s)d[X]_s,
\end{align*}
for all $0 \leq t \leq T$.
\end{Theorem}
\begin{Remark}
{\rm
(i) The term $\int_0^t D^V \Uc(s,\X_s) d^- X_s$ denotes the \emph{forward integral} of $D^V \Uc(\cdot,\X_\cdot)$ with respect to $X$ defined by \emph{regularization} (see, e.g., \cite{russovallois91,russovallois93,russovallois07}), which coincides with the classical stochastic integral whenever $X$ is a semimartingale.
\vspace{1mm}
\noindent(ii) In the non-path-dependent case $\Uc(t,\eta)=F(t,\eta(0))$, for any $(t,\eta)\in[0,T]\times C([-T,0])$, with $F\in C^{1,2}([0,T]\times\R)$, we retrieve the finite-dimensional It\^o formula, see Theorem 2.1 of \cite{russovallois95}.
\ep
}
\end{Remark}
\subsection{Recall on strict solutions}
We recall the concept of strict solution to equation \eqref{KolmEq} from Section 3 in \cite{cosso_russo15a}.
\begin{Definition}
A map $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ in $C^{1,2}(([0,T[\times\textup{past})\times\textup{present})\cap C([0,T]\times C([-T,0]))$, satisfying equation \eqref{KolmEq}, is called a \textbf{strict solution} to equation \eqref{KolmEq}.
\end{Definition}
We now present a probabilistic representation result, for which we adopt the same notation as
in Section \ref{SubS:Notation}, with dimension $d=1$. First, we recall some preliminary results. More precisely, for any $(t,\eta)\in[0,T]\times C([-T,0])$, we consider the \emph{path-dependent SDE}
\begin{equation}
\label{SDE}
\begin{cases}
dX_s \ = \ b(s,\mathbb X_s)ds + \sigma(s,\mathbb X_s)dW_s, \qquad\qquad & s\in[t,T], \\
X_s \ = \ \eta(s-t), & s\in[-T+t,t].
\end{cases}
\end{equation}
\begin{Proposition}
\label{P:SDE}
Under Assumption {\bf (A1)}, for any $(t,\eta)\in[0,T]\times C([-T,0])$ there exists a unique $($up to indistinguishability$)$ $\F$-adapted continuous process $X^{t,\eta}=(X_s^{t,\eta})_{s\in[-T+t,T]}$ strong solution to equation \eqref{SDE}. Moreover, for any $p\geq1$ there exists a positive constant $C_p$ such that
\begin{equation}
\label{EstimateX}
\E\Big[\sup_{s\in[-T+t,T]}\big|X_s^{t,\eta}\big|^p\Big] \ \leq \ C_p \big(1 + \|\eta\|^p\big).
\end{equation}
\end{Proposition}
\textbf{Proof.}
See Lemma \ref{L:SDE}.
\ep
\begin{Theorem}
\label{T:UniqStrict}
Suppose that Assumption {\bf (A1)} holds. Let $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ be a strict solution to equation \eqref{KolmEq} satisfying the polynomial growth condition
\begin{equation}
\label{PolGrowthCondPath}
|\Uc(t,\eta)| \ \leq \ C'\big(1 + \|\eta\|^{m'}\big), \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\end{equation}
for some positive constants $C'$ and $m'$. Then, the following Feynman-Kac formula holds
\[
\Uc(t,\eta) \ = \ Y_t^{t,\eta}, \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\]
where $(Y_s^{t,\eta},Z_s^{t,\eta})_{s\in[t,T]} = (\Uc(s,\mathbb X_s^{t,\eta}),\sigma(s,\mathbb X_s^{t,\eta})D^V\Uc(s,\mathbb X_s^{t,\eta})1_{[t,T[}(s))_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)$ is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\[
Y_s^{t,\eta} \ = \ {H}(\mathbb X_T^{t,\eta}) + \int_s^T F(r,\mathbb X_r^{t,\eta},Y_r^{t,\eta},Z_r^{t,\eta}) dr - \int_s^T Z_r^{t,\eta} dW_r, \qquad t \leq s \leq T.
\]
In particular, there exists at most one strict solution to equation \eqref{KolmEq} satisfying a polynomial growth condition as in \eqref{PolGrowthCondPath}.
\end{Theorem}
\textbf{Proof.}
See Theorem 3.4 in \cite{cosso_russo15a}.
\ep
\vspace{3mm}
We finally state the following existence result.
\begin{Theorem}
\label{T:ExistenceStrict}
Suppose that there exists $N\in\N\backslash\{0\}$ such that, for all $(t,\eta,y,z)\in[0,T]\times C([-T,0])$ $\times\R\times\R$,
\begin{align*}
b(t,\eta) \ &= \ \bar b\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_N(x+t)d^-\eta(x)\bigg), \\
\sigma(t,\eta) \ &= \ \bar\sigma\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_N(x+t)d^-\eta(x)\bigg), \\
F(t,\eta,y,z) \ &= \ \bar F\bigg(t,\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_N(x+t)d^-\eta(x),y,z\bigg), \\
{H}(\eta) \ &= \ \bar{H}\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta(x),\ldots,\int_{[-T,0]}\varphi_N(x+T)d^-\eta(x)\bigg),
\end{align*}
where we refer to Definition 2.4(i) in the companion paper \cite{cosso_russo15a} for the definition of the forward integral with respect to $\eta$, and the following assumptions are made.
\begin{itemize}
\item[\textup{(i)}] $\bar b$, $\bar\sigma$, $\bar F$, $\bar{H}$ are continuous and satisfy Assumption {\bf (A0)}.
\item[\textup{(ii)}] $\bar b$ and $\bar\sigma$ are of class $C^3$ with partial derivatives from order $1$ up to order $3$ bounded.
\item[\textup{(iii)}] For all $t\in[0,T]$, $\bar F(t,\cdot,\cdot,\cdot)\in C^3(\R^N)$ and moreover
we assume the validity of the properties below.
\begin{enumerate}
\item[\textup{(a)}] $\bar F(t,\cdot,0,0)$ belongs to $C^3$ and its third order partial derivatives satisfy a polynomial growth condition uniformly in $t$.
\item[\textup{(b)}] $D_y\bar F$, $D_z\bar F$ are bounded on $[0,T]\times\R^N\times\R\times\R$, as well as their first and second order derivatives with respect to $x_1,\ldots,x_N,y,z$.
\end{enumerate}
\item[\textup{(iv)}] $\bar H\in C^3(\R^N)$ and its third order partial derivatives satisfy a polynomial growth condition.
\item[\textup{(v)}] $\varphi_1,\ldots,\varphi_N\in C^2([0,T])$.
\end{itemize}
Then, the map $\Uc$ given by
\[
\Uc(t,\eta) \ = \ Y_t^{t,\eta}, \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\]
where $(Y_s^{t,\eta},Z_s^{t,\eta})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)$ is the unique solution to \eqref{BSDE_SV}, is a strict solution to equation \eqref{KolmEq}.
\end{Theorem}
\textbf{Proof.}
See Theorem 3.6 in \cite{cosso_russo15a}.
\ep
\begin{Remark}
\label{R:b_sigma_time-dep}
{\rm
Notice that in Theorem \ref{T:ExistenceStrict} the functions $\bar b$ and $\bar\sigma$ do not depend on time. For the case where $\bar b$ and $\bar\sigma$ are time-dependent, we refer to Theorem 3.5 in \cite{cosso_russo15a} (notice that, in this case, $F=F(t,\eta)$ does not depend on $(y,z)$).
\ep
}
\end{Remark}
\subsection{Strong-viscosity solutions}
In the present section, we introduce the notion of strong-viscosity solution to equation \eqref{KolmEq}. To this end, we extend in a natural way Definition \ref{D:ViscosityFinite} to the present path-dependent case, see also Remark \ref{R:TwoDefinitions}.
\begin{Remark}
\label{R:MotivationSV}
{\rm
As a motivation for the introduction of a viscosity type solution for path-dependent PDEs, let us consider the following hedging example in mathematical finance, taken from Section 3.2 in \cite{cosso_russo14}. Let $b\equiv 0$, $\sigma\equiv 1$, $F\equiv0$ and consider the lookback-type payoff
\[
{H}(\eta) \ = \ \sup_{x\in[-T,0]} \eta(x), \qquad \forall\,\eta \in C([-T,0]).
\]
Then, we look for a solution to the following linear parabolic path-dependent PDE:
\begin{equation}
\label{KolmEqLinear}
\begin{cases}
\partial_t \Uc + D^H \Uc + \frac{1}{2}D^{VV} \Uc \ = \ 0, \qquad &\quad\forall\,(t,\eta)\in[0,T[\times C([-T,0]), \\
\Uc(T,\eta) \ = \ {H}(\eta), &\quad\forall\,\eta\in C([-T,0]).
\end{cases}
\end{equation}
We refer to \eqref{KolmEqLinear} as the path-dependent heat equation. Notice, however, that \eqref{KolmEqLinear} does not have the smoothing effect characterizing the classical heat equation, in spite of the regularity properties illustrated in Section 3.2 of \cite{cosso_russo14}. Indeed, let us consider the functional
\[
\Uc(t,\eta) \ = \ \E\big[{H}(\mathbb W_T^{t,\eta})\big] \ = \ \E\Big[\sup_{-T\leq x\leq0}\mathbb W_T^{t,\eta}(x)\Big], \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\]
where, for any $t \leq s \leq T$,
\[
\mathbb W_s^{t,\eta}(x) \ = \
\begin{cases}
\eta(x+s-t), &\quad-T \leq x \leq t-s, \\
\eta(0) + W_{x+s} - W_t, \qquad &\quad t-s < x \leq 0.
\end{cases}
\]
If $\Uc$ belonged to $C^{1,2}(([0,T[\times\textup{past})\times\textup{present})\cap C([0,T]\times C([-T,0]))$, then $\Uc$ would solve equation \eqref{KolmEqLinear}. However, as claimed in \cite{cosso_russo14}, $\Uc$ is not a strict solution to \eqref{KolmEqLinear}. On the other hand, since ${H}$ is continuous and has linear growth, it follows from Theorems \ref{T:UniqSV} and \ref{T:ExistSV} that $\Uc$ is the unique strong-viscosity solution to equation \eqref{KolmEqLinear}.
\ep
}
\end{Remark}
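As an illustration of the lookback example above, the value $\Uc(0,\eta)$ for the flat path $\eta\equiv0$ can be estimated by plain Monte Carlo simulation. The following sketch is hypothetical code added for illustration (the discretization parameters `n_steps`, `n_paths` are arbitrary choices); it compares the estimate with the closed-form value $\E[\sup_{[0,T]}W]=\sqrt{2T/\pi}$ given by the reflection principle. Note that the maximum over a discrete grid slightly underestimates the continuous supremum.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 400, 10000
dt = T / n_steps

# Simulate Brownian paths on [0, T] and estimate
# Uc(0, eta) = E[ sup_{[0,T]} W ] for the flat initial path eta == 0.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
est = W.max(axis=1).mean()

# Reflection principle: sup_{[0,T]} W has the law of |W_T|,
# hence E[sup W] = sqrt(2T/pi) ~ 0.798 for T = 1.
print(est, (2 * T / np.pi) ** 0.5)
```

The estimate sits slightly below the exact value, as expected from the grid discretization of the supremum.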
\begin{Definition}
\label{D:Strong-Visc}
A function $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ is called a \textbf{strong-viscosity solution} to equation \eqref{KolmEq} if there exists a sequence $(\Uc_n,{H}_n,F_n,b_n,\sigma_n)_n$ of Borel measurable functions $\Uc_n\colon[0,T]\times C([-T,0])\rightarrow\R$, ${H}_n\colon C([-T,0])\rightarrow\R$, $F_n\colon[0,T]\times C([-T,0])\times\R\times\R\rightarrow\R$, $b_n\colon[0,T]\times C([-T,0])\rightarrow\R$, $\sigma_n\colon[0,T]\times C([-T,0])\rightarrow\R$, such that the following holds.
\begin{itemize}
\item[\textup{(i)}] For some positive constants $C$ and $m$,
\begin{align*}
|b_n(t,\eta)| + |\sigma_n(t,\eta)| \ &\leq \ C(1 + \|\eta\|), \\
|b_n(t,\eta) - b_n(t,\eta')| + |\sigma_n(t,\eta) - \sigma_n(t,\eta')| \ &\leq \ C\|\eta - \eta'\|, \\
|F_n(t,\eta,y,z) - F_n(t,\eta,y',z')| \ &\leq \ C\big(|y-y'| + |z-z'|\big), \\
|{H}_n(\eta)| + |F_n(t,\eta,0,0)| + |\Uc_n(t,\eta)| \ &\leq \ C\big(1 + \|\eta\|^m\big),
\end{align*}
for all $t\in[0,T]$, $\eta,\eta'\in C([-T,0])$, $y,y',z,z'\in\R$. Moreover, the functions $\Uc_n(t,\cdot)$, ${H}_n(\cdot)$, $F_n(t,\cdot,\cdot,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
\item[\textup{(ii)}] $\Uc_n$ is a strict solution to
\[
\begin{cases}
\partial_t\Uc_n + D^H\Uc_n + b_n(t,\eta)D^V\Uc_n + \frac{1}{2}\sigma_n(t,\eta)^2D^{VV}\Uc_n \\
\hspace{2.5cm} +\, F_n(t,\eta,\Uc_n,\sigma_n(t,\eta)D^V\Uc_n) \ = \ 0, \;\;\; &\;\;\forall\,(t,\eta)\in[0,T[\times C([-T,0]), \\
\Uc_n(T,\eta) \ = \ {H}_n(\eta), &\;\;\forall\,\eta\in C([-T,0]).
\end{cases}
\]
\item[\textup{(iii)}] $(\Uc_n,{H}_n,F_n,b_n,\sigma_n)_n$ converges pointwise to $(\Uc,{H},F,b,\sigma)$ as $n\rightarrow\infty$.
\end{itemize}
\end{Definition}
We present a Feynman-Kac type representation for a generic strong-viscosity solution to equation \eqref{KolmEq}, which, as a consequence, yields a uniqueness result.
\begin{Theorem}
\label{T:UniqSV}
Let Assumption {\bf (A1)} hold and let $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ be a strong-viscosity solution to equation \eqref{KolmEq}. Then, the following Feynman-Kac formula holds
\begin{equation}
\label{Feynman-Kac}
\Uc(t,\eta) \ = \ Y_t^{t,\eta}, \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\end{equation}
where $(Y_s^{t,\eta},Z_s^{t,\eta})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)$, with $Y_s^{t,\eta}=\Uc(s,\X_s^{t,\eta})$, is the unique solution in $\S^2(t,T)\times\H^2(t,T)$ to the backward stochastic differential equation: $\P$-a.s.
\begin{equation}
\label{BSDE_SV}
Y_s^{t,\eta} \ = \ {H}(\X_T^{t,\eta}) + \int_s^T F(r,\X_r^{t,\eta},Y_r^{t,\eta},Z_r^{t,\eta}) dr - \int_s^T Z_r^{t,\eta} dW_r, \qquad t\leq s\leq T.
\end{equation}
In particular, there exists at most one strong-viscosity solution to equation \eqref{KolmEq}.
\end{Theorem}
\textbf{Proof.}
Let $(\Uc_n,{H}_n,F_n,b_n,\sigma_n)_n$ be as in Definition \ref{D:Strong-Visc} and, for any $(t,\eta)\in[0,T]\times C([-T,0])$, denote by $X^{n,t,\eta}=(X_s^{n,t,\eta})_{s\in[t,T]}$ the unique solution to equation \eqref{Xn}. Then, from Theorem \ref{T:UniqStrict} we have that $(Y_s^{n,t,\eta},Z_s^{n,t,\eta})_{s\in[t,T]}=(\Uc_n(s,\X_s^{n,t,\eta}),\sigma_n(s,\X_s^{n,t,\eta})D^V\Uc_n(s,\X_s^{n,t,\eta})1_{[t,T[}(s))_{s\in[t,T]}$ is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\[
Y_s^{n,t,\eta} \ = \ {H}_n(\X_T^{n,t,\eta}) + \int_s^T F_n(r,\X_r^{n,t,\eta},Y_r^{n,t,\eta},Z_r^{n,t,\eta}) dr - \int_s^T Z_r^{n,t,\eta} dW_r, \qquad t\leq s\leq T.
\]
We now wish to take the limit as $n$ goes to infinity in the above equation. To this end, we rely on Theorem \ref{T:LimitThmBSDE}, whose assumptions we now check. From the polynomial growth condition of $\Uc_n$ together with estimate \eqref{EstimateSupXn}, there exists, for every $p\geq1$, a constant $\tilde C_p\geq0$ such that
\begin{equation}
\label{EstimateSupYn}
\big\|Y^{n,t,\eta}\big\|_{_{\S^p(t,T)}}^p \ \leq \ \tilde C_p \big(1 + \|\eta\|^p\big), \qquad \forall\,n\in\N.
\end{equation}
Now, from Proposition \ref{P:EstimateBSDEAppendix} we have that there exists a constant $\tilde c\geq0$ (depending only on $T$ and on the Lipschitz constant $C$ of $F_n$ with respect to $(y,z)$ appearing in Definition \ref{D:Strong-Visc}(i)) such that
\[
\big\|Z^{n,t,\eta}\big\|_{_{\H^2(t,T)}}^2 \ \leq \ \tilde c \bigg(\big\|Y^{n,t,\eta}\big\|_{_{\S^2(t,T)}}^2 + \E \int_t^T |F_n(s,\X_s^{n,t,\eta},0,0)|^2 ds\bigg).
\]
Therefore, from \eqref{EstimateSupYn}, the polynomial growth condition of $F_n$, and estimate \eqref{EstimateSupXn}, we find that $\sup_n\|Z^{n,t,\eta}\|_{_{\H^2(t,T)}}^2<\infty$. Moreover, from \eqref{limXn-X} we see that, for any $s\in[t,T]$, $\|\X_s^{n,t,\eta}(\omega)-\X_s^{t,\eta}(\omega)\|\rightarrow0$, as $n\rightarrow\infty$, for $\P$-a.e. $\omega\in\Omega$. Fix such an $\omega$ and consider the set $K_\omega\subset C([-T,0])$ given by
\[
K_\omega \ := \ \big(\cup_{n\in\N} \big\{\X_s^{n,t,\eta}(\omega)\big\}\big) \cup \big\{\X_s^{t,\eta}(\omega)\big\}.
\]
Then, $K_\omega$ is a compact subset of $C([-T,0])$. Since the sequence $(F_n(s,\cdot,\cdot,\cdot))_n$ is equicontinuous on compact sets and converges pointwise to $F(s,\cdot,\cdot,\cdot)$, it follows that $(F_n(s,\cdot,\cdot,\cdot))_n$ converges to $F(s,\cdot,\cdot,\cdot)$ uniformly on compact sets. In particular, we have
\begin{align*}
&\big|F_n(s,\X_s^{n,t,\eta}(\omega),0,0) - F(s,\X_s^{t,\eta}(\omega),0,0)\big| \\
&\leq \ \sup_{\eta\in K_\omega} \big|F_n(s,\eta,0,0) - F(s,\eta,0,0)\big| + \big|F(s,\X_s^{n,t,\eta}(\omega),0,0) - F(s,\X_s^{t,\eta}(\omega),0,0)\big| \ \overset{n\rightarrow\infty}{\longrightarrow} \ 0.
\end{align*}
Similarly, we have
\begin{align*}
&\big|\Uc_n(s,\X_s^{n,t,\eta}(\omega)) - \Uc(s,\X_s^{t,\eta}(\omega))\big| \\
&\leq \ \sup_{\eta\in K_\omega} \big|\Uc_n(s,\eta) - \Uc(s,\eta)\big| + \big|\Uc(s,\X_s^{n,t,\eta}(\omega)) - \Uc(s,\X_s^{t,\eta}(\omega))\big| \ \overset{n\rightarrow\infty}{\longrightarrow} \ 0.
\end{align*}
Let us now define $Y_s^{t,\eta}:=\Uc(s,\X_s^{t,\eta})$, for all $s\in[t,T]$. We can then apply Theorem \ref{T:LimitThmBSDE} (notice that, in this case, for every $n\in\N$, the process $K^n$ appearing in Theorem \ref{T:LimitThmBSDE} is identically zero, so that $K$ is also identically zero), from which it follows that there exists $Z^{t,\eta}\in\H^2(t,T)$ such that the pair $(Y^{t,\eta},Z^{t,\eta})$ solves equation \eqref{BSDE_SV}. From Theorem 3.1 in \cite{parpen90} we have that $(Y^{t,\eta},Z^{t,\eta})$ is the unique pair in $\S^2(t,T)\times\H^2(t,T)$ satisfying equation \eqref{BSDE_SV}. This concludes the proof.
\ep
\vspace{3mm}
By Theorem \ref{T:UniqSV} we deduce Lemma \ref{L:Stability} below, which says that in Definition \ref{D:Strong-Visc} the convergence of $(\Uc_n)_n$ is indeed a consequence of the convergence of the coefficients $({H}_n,F_n,b_n,\sigma_n)_n$. This result is particularly useful to establish the existence of strong-viscosity solutions, as in the proof of Theorem \ref{T:ExistSV}.
\begin{Lemma}
\label{L:Stability}
Suppose that Assumption {\bf (A1)} holds and let $(\Uc_n,{H}_n,F_n,b_n,\sigma_n)_n$ be as in Definition \ref{D:Strong-Visc}. Then, there exists $\Uc\colon[0,T]\times C([-T,0])\rightarrow\R$ such that $(\Uc_n)_n$ converges pointwise to $\Uc$. In particular, $\Uc$ is a strong-viscosity solution to equation \eqref{KolmEq} and is given by formula \eqref{Feynman-Kac}.
\end{Lemma}
\textbf{Proof.}
Let us prove the pointwise convergence of the sequence $(\Uc_n)_{n\in\N}$ to the function $\Uc$ given by formula \eqref{Feynman-Kac}. To this end, we notice that, from Theorem \ref{T:UniqStrict}, for every $n\in\N$, $\Uc_n$ is given by
\[
\Uc_n(t,\eta) \ = \ Y_t^{n,t,\eta}, \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\]
where $(Y^{n,t,\eta},Z^{n,t,\eta}) = (\Uc_n(\cdot,\mathbb X^{n,t,\eta}),\sigma_n(\cdot,\mathbb X^{n,t,\eta})D^V\Uc_n(\cdot,\mathbb X^{n,t,\eta})1_{[t,T[})\in\S^2(t,T)\times\H^2(t,T)$ is the unique solution to the backward stochastic differential equation: $\P$-a.s.,
\[
Y_s^{n,t,\eta} \ = \ {H}_n(\mathbb X_T^{n,t,\eta}) + \int_s^T F_n(r,\mathbb X_r^{n,t,\eta},Y_r^{n,t,\eta},Z_r^{n,t,\eta}) dr - \int_s^T Z_r^{n,t,\eta} dW_r, \quad t \leq s \leq T,
\]
with
\[
X_s^{n,t,\eta} \ = \ \eta(0\wedge(s-t)) + \int_t^{t\vee s} b_n(r,\mathbb X_r^{n,t,\eta})dr + \int_t^{t\vee s} \sigma_n(r,\mathbb X_r^{n,t,\eta})dW_r, \;\; - T + t \leq s \leq T.
\]
Consider the function $\Uc$ given by formula \eqref{Feynman-Kac}. From estimate \eqref{EstimateBSDE2} we have that there exists a constant $C$, independent of $n\in\N$, such that
\begin{align}
|\Uc_n(t,\eta) - \Uc(t,\eta)|^2 \ &\leq \ C\,\E\big[\big|H_n(\X_T^{n,t,\eta}) - H(\X_T^{t,\eta})\big|^2\big] \label{Unketa-Unketa'} \\
&\quad \ + C\int_t^T \E\big[\big|F_n(s,\X_s^{n,t,\eta},Y_s^{t,\eta},Z_s^{t,\eta}) - F(s,\X_s^{t,\eta},Y_s^{t,\eta},Z_s^{t,\eta})\big|^2\big] ds, \notag
\end{align}
for all $t\in[0,T]$ and $\eta\in C([-T,0])$. Now we recall that
\begin{itemize}
\item[(i)] $(H_n,F_n,b_n,\sigma_n)_{n\in\N}$ converges pointwise to $({H},F,b,\sigma)$ as $n\rightarrow\infty$.
\item[(ii)] The functions $H_n(\cdot),F_n(t,\cdot,\cdot,\cdot),b_n(t,\cdot),\sigma_n(t,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
\end{itemize}
We notice that (i) and (ii) imply the following property:
\begin{itemize}
\item[(iii)] $(H_n(\eta_n),F_n(t,\eta_n,y,z),b_n(t,\eta_n),\sigma_n(t,\eta_n))$ converges to $(H(\eta),F(t,\eta,y,z),b(t,\eta),\sigma(t,\eta))$ as $n\rightarrow\infty$, $\forall\,(t,y,z)\in[0,T]\times\R\times\R$, $\forall\,(\eta_n)_{n\in\N}\subset C([-T,0])$ with $\eta_n\rightarrow\eta\in C([-T,0])$.
\end{itemize}
Let us now recall that, for any $s\in[t,T]$, we have
\[
\X_s^{n,t,\eta}(x) \ =
\begin{cases}
\eta(s-t+x), \qquad & x\in[-T,t-s], \\
X_{s+x}^{n,t,\eta}, & x\in\;]t-s,0],
\end{cases}
\quad
\X_s^{t,\eta}(x) \ =
\begin{cases}
\eta(s-t+x), \qquad & x\in[-T,t-s], \\
X_{s+x}^{t,\eta}, & x\in\;]t-s,0].
\end{cases}
\]
Therefore, for every $p\geq1$,
\begin{equation}
\label{ConvX}
\E\Big[\sup_{t\leq s\leq T} \|\X_s^{n,t,\eta} - \X_s^{t,\eta}\|_\infty^p\Big] \ = \ \E\Big[\sup_{t\leq s\leq T} |X_s^{n,t,\eta} - X_s^{t,\eta}|^p\Big] \ \overset{n\rightarrow\infty}{\longrightarrow} \ 0,
\end{equation}
where the convergence follows from \eqref{limXn-X}. Then, we claim that the following convergences in probability hold:
\begin{align}
&\big|H_n(\X_T^{n,t,\eta}) - H(\X_T^{t,\eta})\big|^2 \ \overset{\P}{\underset{n\rightarrow\infty}{\longrightarrow}} \ 0, \label{ConvH} \\
&\big|F_n(s,\X_s^{n,t,\eta},Y_s^{t,\eta},Z_s^{t,\eta}) - F(s,\X_s^{t,\eta},Y_s^{t,\eta},Z_s^{t,\eta})\big|^2 \ \overset{\P}{\underset{n\rightarrow\infty}{\longrightarrow}} \ 0, \label{ConvF}
\end{align}
for all $s\in[t,T]$. Concerning \eqref{ConvH}, we begin by noting that it is enough to prove that every subsequence $(|H_{n_m}(\X_T^{n_m,t,\eta}) - H(\X_T^{t,\eta})|^2)_{m\in\N}$ admits a subsubsequence converging to zero. From \eqref{ConvX} and property (iii) above, it follows that there exists a subsubsequence $(|H_{n_{m_\ell}}(\X_T^{n_{m_\ell},t,\eta}) - H(\X_T^{t,\eta})|^2)_{\ell\in\N}$ which converges $\P$-a.s., and therefore in probability, to zero. This concludes the proof of \eqref{ConvH}. In a similar way we can prove \eqref{ConvF}.
From \eqref{ConvH} and \eqref{ConvF}, together with the uniform integrability of the sequences $(|H_n(\X_T^{n,t,\eta}) - H(\X_T^{t,\eta})|^2)_{n\in\N}$ and $(|F_n(s,\X_s^{n,t,\eta},Y_s^{t,\eta},Z_s^{t,\eta}) - F(s,\X_s^{t,\eta},Y_s^{t,\eta},Z_s^{t,\eta})|^2)_{n\in\N}$, for every $s\in[t,T]$, we deduce that
\begin{align*}
&\lim_{n\rightarrow\infty} \E\big[\big|H_n(\X_T^{n,t,\eta}) - H(\X_T^{t,\eta})\big|^2\big] \ = \ 0, \\
&\lim_{n\rightarrow\infty} \E\big[\big|F_n(s,\X_s^{n,t,\eta},Y_s^{t,\eta},Z_s^{t,\eta}) - F(s,\X_s^{t,\eta},Y_s^{t,\eta},Z_s^{t,\eta})\big|^2\big] \ = \ 0.
\end{align*}
From the second convergence, the polynomial growth condition of $F$ and $F_n$ (uniform in $n$), and standard moment estimates for $\|\X^{n,t,\eta}\|_\infty\leq\sup_{t\leq s\leq T}|X_s^{n,t,\eta}|$ (see estimate \eqref{EstimateSupXn}), it follows that
\[
\lim_{n\rightarrow\infty} \int_t^T \E\big[\big|F_n(s,\X_s^{n,t,\eta},Y_s^{t,\eta},Z_s^{t,\eta}) - F(s,\X_s^{t,\eta},Y_s^{t,\eta},Z_s^{t,\eta})\big|^2\big] ds \ = \ 0.
\]
As a consequence, we have $|\Uc_n(t,\eta)-\Uc(t,\eta)|^2\rightarrow0$ as $n\rightarrow\infty$, which concludes the proof.
\ep
\vspace{3mm}
We can now state an existence result. Notice that it holds under quite general conditions on the terminal condition $H$ of equation \eqref{KolmEq}.
\begin{Theorem}
\label{T:ExistSV}
Let Assumption {\bf (A1)} hold and suppose that ${H}$ is continuous. Suppose also that there exists a nondecreasing sequence $(N_n)_{n\in\N}\subset\N\backslash\{0\}$ such that, for all $n\in\N$ and $(t,\eta,y,z)\in[0,T]\times C([-T,0])\times\R\times\R$,
\begin{align*}
b_n(t,\eta) \ &= \ \bar b_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_{N_n}(x+t)d^-\eta(x)\bigg), \\
\sigma_n(t,\eta) \ &= \ \bar\sigma_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_{N_n}(x+t)d^-\eta(x)\bigg), \\
F_n(t,\eta,y,z) \ &= \ \bar F_n\bigg(t,\int_{[-t,0]}\varphi_1(x+t)d^-\eta(x),\ldots,\int_{[-t,0]}\varphi_{N_n}(x+t)d^-\eta(x),y,z\bigg),
\end{align*}
where the following holds.
\begin{itemize}
\item[\textup{(i)}] $\bar b_n$, $\bar\sigma_n$, $\bar F_n$ are continuous and satisfy Assumption {\bf (A0)} with constants $C$ and $m$ independent of $n$.
\item[\textup{(ii)}] For every $n\in\N$, $\bar b_n,\bar\sigma_n,\bar F_n$ satisfy items (ii) and (iii) of Theorem \ref{T:ExistenceStrict}.
\item[\textup{(iii)}] The functions $b_n(t,\cdot)$, $\sigma_n(t,\cdot)$, $F_n(t,\cdot,\cdot,\cdot)$, $n\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
\item[\textup{(iv)}] $\varphi_1,\ldots,\varphi_{N_n}\in C^2([0,T])$ are uniformly bounded with respect to $n\in\N$, together with their first derivatives.
\item[\textup{(v)}] $(b_n,\sigma_n,F_n)_n$ converges pointwise to $(b,\sigma,F)$ as $n\rightarrow\infty$.
\end{itemize}
Then, the map $\Uc$ given by
\begin{equation}
\label{Feynman-Kac2}
\Uc(t,\eta) \ = \ Y_t^{t,\eta}, \qquad \forall\,(t,\eta)\in[0,T]\times C([-T,0]),
\end{equation}
where $(Y_s^{t,\eta},Z_s^{t,\eta})_{s\in[t,T]}\in\S^2(t,T)\times\H^2(t,T)$ is the unique solution to \eqref{BSDE_SV}, is a strong-viscosity solution to equation \eqref{KolmEq}.
\end{Theorem}
\textbf{Proof.} We divide the proof into four steps. In the first three steps we construct an approximating sequence of smooth functions for $H$. We conclude the proof in the fourth step.
\vspace{1mm}
\noindent\textbf{Step I.} \emph{Approximation of $\eta\in C([-t,0])$, $t\in\,]0,T]$, with Fourier partial sums.} Consider the sequence $(e_i)_{i\in\N}$ of $C^{\infty}([-T,0])$ functions:
\[
e_0 = \frac{1}{\sqrt{T}}, \qquad e_{2i-1}(x) = \sqrt{\frac{2}{T}}\sin\bigg(\frac{2\pi}{T}(x+T)i\bigg), \qquad e_{2i}(x) = \sqrt{\frac{2}{T}}\cos\bigg(\frac{2\pi}{T}(x+T)i\bigg),
\]
for all $i\in\N\backslash\{0\}$. Then $(e_i)_{i\in\N}$ is an orthonormal basis of $L^2([-T,0])$. Let us define the linear operator $\Lambda\colon C([-T,0])\rightarrow C([-T,0])$ by
\[
(\Lambda\eta)(x) \ = \ \frac{\eta(0)-\eta(-T)}{T}x, \qquad x\in[-T,0],\,\eta\in C([-T,0]).
\]
Notice that $(\eta-\Lambda\eta)(-T) = (\eta-\Lambda\eta)(0)$, therefore $\eta-\Lambda\eta$ can be extended to the entire real line in a periodic way with period $T$, so that we can expand it in Fourier series. In particular, for each $n\in\N$ and $\eta\in C([-T,0])$, consider the Fourier partial sum
\begin{equation}
\label{E:s_n}
s_n(\eta-\Lambda\eta) \ = \ \sum_{i=0}^n (\eta_i-(\Lambda\eta)_i) e_i, \qquad \forall\,\eta\in C([-T,0]),
\end{equation}
where, denoting $\tilde e_i(x) = \int_{-T}^x e_i(y) dy$ for any $x\in[-T,0]$, by the integration by parts formula (2.4) of \cite{cosso_russo15a},
\begin{align}
\label{E:eta_i}
\eta_i \ = \ \int_{-T}^0 \eta(x)e_i(x) dx \ &= \ \eta(0)\tilde e_i(0) - \int_{[-T,0]} \tilde e_i(x)d^-\eta(x) \ = \ \int_{[-T,0]} (\tilde e_i(0) - \tilde e_i(x)) d^- \eta(x),
\end{align}
since $\eta(0) = \int_{[-T,0]}d^-\eta(x)$. Moreover we have
\begin{align}
\label{E:Lambda_eta_i}
(\Lambda\eta)_i \ &= \ \int_{-T}^0 (\Lambda\eta)(x)e_i(x) dx \ = \ \frac{1}{T} \int_{-T}^0 xe_i(x) dx \bigg(\int_{[-T,0]} d^-\eta(x) - \eta(-T)\bigg).
\end{align}
Define the Fej\'er sum $\sigma_n=\frac{s_0 + s_1 + \cdots + s_n}{n+1}$ (not to be confused with the diffusion coefficients, also denoted $\sigma_n$). Then, by \eqref{E:s_n},
\[
\sigma_n(\eta-\Lambda\eta) \ = \ \sum_{i=0}^n \frac{n+1-i}{n+1} (\eta_i-(\Lambda\eta)_i) e_i, \qquad \forall\,\eta\in C([-T,0]).
\]
We know from Fej\'er's theorem on Fourier series (see, e.g., Theorem 3.4, Chapter III, in \cite{zygmund02}) that, for any $\eta\in C([-T,0])$, $\sigma_n(\eta-\Lambda\eta)\rightarrow\eta-\Lambda\eta$ uniformly on $[-T,0]$, as $n$ tends to infinity, and $\|\sigma_n(\eta-\Lambda\eta)\|_\infty \leq \|\eta-\Lambda\eta\|_\infty$. Let us define the linear operator $T_n\colon C([-T,0])\rightarrow C([-T,0])$ by (denoting $e_{-1}(x) = x$, for any $x\in[-T,0]$)
\begin{align*}
T_n\eta \ = \ \sigma_n(\eta-\Lambda\eta) + \Lambda\eta \ &= \ \sum_{i=0}^n \frac{n+1-i}{n+1} (\eta_i-(\Lambda\eta)_i) e_i + \frac{\eta(0) - \eta(-T)}{T}e_{-1} \notag \\
&= \ \sum_{i=0}^n \frac{n+1-i}{n+1} y_ie_i + y_{-1}e_{-1},
\end{align*}
where, using \eqref{E:eta_i} and \eqref{E:Lambda_eta_i},
\begin{align*}
y_{-1} \ &= \ \int_{[-T,0]} \frac{1}{T} d^-\eta(x) - \frac{1}{T}\eta(-T), \\
y_i \ &= \ \int_{[-T,0]}\bigg (\tilde e_i(0)-\tilde e_i(x)- \frac{1}{T}\int_{-T}^0 xe_i(x)dx\bigg )d^-\eta(x) + \frac{1}{T}\int_{-T}^0 xe_i(x)dx\,\eta(-T),
\end{align*}
for $i=0,\ldots,n$. Then, for any $\eta\in C([-T,0])$, $T_n\eta\rightarrow\eta$ uniformly on $[-T,0]$, as $n$ tends to infinity. Furthermore, there exists a positive constant $M$ such that
\begin{equation}
\label{E:UniformBoundT_n}
\|T_n\eta\|_\infty \ \leq \ M\|\eta\|_\infty, \qquad \forall\,n\in\N,\,\forall\,\eta\in C([-T,0]).
\end{equation}
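The operator $T_n$ is entirely constructive, so its uniform convergence can be checked numerically. The sketch below is hypothetical code added for illustration (the Fourier integrals are discretized by a plain Riemann sum on a grid, and the test path `eta` is an arbitrary kinked function): it builds $T_n\eta$ from the trigonometric basis, the linear correction $\Lambda\eta$, and the Fej\'er weights, and evaluates $\|T_n\eta-\eta\|_\infty$ for increasing $n$.

```python
import numpy as np

T = 1.0
x = np.linspace(-T, 0.0, 4001)   # grid on [-T, 0]
dx = x[1] - x[0]

def e(i):
    # orthonormal trigonometric basis of L^2([-T, 0]) from Step I
    if i == 0:
        return np.full_like(x, 1.0 / np.sqrt(T))
    k = (i + 1) // 2
    arg = 2.0 * np.pi * k * (x + T) / T
    return np.sqrt(2.0 / T) * (np.sin(arg) if i % 2 == 1 else np.cos(arg))

def T_n(eta, n):
    lam = (eta[-1] - eta[0]) / T * x           # (Lambda eta)(x) = (eta(0)-eta(-T))/T * x
    g = eta - lam                              # periodic part: g(-T) = g(0)
    out = lam.copy()
    for i in range(n + 1):
        ei = e(i)
        c = np.sum(g * ei) * dx                # Fourier coefficient (eta - Lambda eta)_i
        out += (n + 1 - i) / (n + 1) * c * ei  # Fejer (Cesaro) weight
    return out

eta = np.abs(x + 0.3)                          # a continuous path with a kink
err = lambda n: np.max(np.abs(T_n(eta, n) - eta))
print(err(5), err(200))                        # the uniform error decreases with n
```

Consistently with Fej\'er's theorem, the uniform error goes to zero even though $\eta$ is merely continuous, while a plain Fourier partial sum of a kinked path would exhibit Gibbs oscillations.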
Then, we define
\[
\tilde{H}_n(\eta) \ := \ {H}(T_n\eta), \qquad \forall\,\eta\in C([-T,0]).
\]
Notice that $\tilde{H}_n$ satisfies a polynomial growth condition as in Assumption {\bf (A1)} with constants $C$ and $m$ independent of $n$. Moreover, since ${H}$ is uniformly continuous on compact sets, from \eqref{E:UniformBoundT_n} we see that $(\tilde{H}_n)_n$ is equicontinuous on compact sets. Now, we define the function $\bar{H}_n\colon\R^{n+2}\rightarrow\R$ as follows
\[
\bar{H}_n(y_{-1},\ldots,y_n) \ := \ {H}\bigg(\sum_{i=0}^n \frac{n+1-i}{n+1} y_ie_i + y_{-1}e_{-1}\bigg), \qquad \forall\,(y_{-1},\ldots,y_n)\in\R^{n+2}.
\]
Then, we have
\[
\tilde{H}_n(\eta) = \bar{H}_n\bigg(\int_{[-T,0]} \!\! \psi_{-1}(x + T) d^-\eta(x) + a_{-1}\eta(-T),\ldots,\int_{[-T,0]} \!\! \psi_n(x + T) d^-\eta(x) + a_n\eta(-T)\bigg),
\]
for all $\eta\in C([-T,0])$, $n\in\N$, where
\begin{align*}
\psi_{-1}(x) \ &= \ \frac{1}{T}, \qquad \psi_i(x) \ = \ \tilde e_i(0) - \tilde e_i(x - T) - \frac{1}{T} \int_{-T}^0 xe_i(x)dx, \qquad x\in[-T,0], \\
a_{-1} \ &= \ - \frac{1}{T}, \qquad\;\;\, a_i \ = \ \frac{1}{T} \int_{-T}^0 xe_i(x)dx.
\end{align*}
\vspace{1mm}
\noindent\textbf{Step II.} \emph{Smoothing of $\eta(-T)$ through mollifiers.} Consider the function $\phi\in C^\infty([0,\infty[)$ given by
\[
\phi(x) \ = \ c \exp\bigg(\frac{1}{x^2 - T^2}\bigg) 1_{[0,T[}(x), \qquad \forall\,x\geq0,
\]
with $c>0$ such that $\int_0^\infty \phi(x) dx=1$. Then, we define $\phi_m(x)=m\phi(mx)$, $\forall\,x\geq0$, $m\in\N$. Notice that
\begin{align*}
\int_{-T}^0 \eta(x)\phi_m(x + T) dx \ &= \ \eta(0)\tilde\phi_m(T) - \int_{[-T,0]} \tilde\phi_m(x + T) d^-\eta(x) \\
&= \ \int_{[-T,0]} \big(\tilde\phi_m(T) - \tilde\phi_m(x + T)\big) d^-\eta(x),
\end{align*}
where $\tilde\phi_m(x)=\int_0^x \phi_m(z) dz$, $x\in[0,T]$. In particular, we have
\[
\lim_{m\rightarrow\infty} \int_{[-T,0]} \big(\tilde\phi_m(T) - \tilde\phi_m(x + T)\big) d^-\eta(x) \ = \ \lim_{m\rightarrow\infty} \int_{-T}^0 \eta(x)\phi_m(x + T) dx \ = \ \eta(-T).
\]
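The approximation of $\eta(-T)$ in Step II can likewise be checked numerically. The following sketch is hypothetical code added for illustration (the normalizing constant $c$ and the integrals are computed by numerical quadrature on grids of arbitrary size): it evaluates $\int_{-T}^0\eta(x)\phi_m(x+T)\,dx$ for increasing $m$ and compares it with $\eta(-T)$.

```python
import numpy as np

T = 1.0
# phi(x) = c * exp(1/(x^2 - T^2)) on [0, T), with c chosen so phi integrates to 1
u = np.linspace(0.0, T, 200001)[:-1]        # drop the singular endpoint x = T
du = u[1] - u[0]
c = 1.0 / (np.sum(np.exp(1.0 / (u**2 - T**2))) * du)

def smoothed_left_value(eta, m):
    # int_{-T}^0 eta(x) phi_m(x+T) dx with phi_m(x) = m * phi(m x):
    # a weighted average of eta over [-T, -T + T/m], concentrating at -T
    z = np.linspace(0.0, T / m, 20001)[:-1]
    dz = z[1] - z[0]
    w = m * c * np.exp(1.0 / ((m * z) ** 2 - T**2))
    return np.sum(eta(z - T) * w) * dz

# For eta = cos, the smoothed value approaches eta(-T) = cos(-T) as m grows
print(smoothed_left_value(np.cos, 5), smoothed_left_value(np.cos, 200), np.cos(-T))
```

Since $\phi_m$ is supported in $[0,T/m]$, the integral only sees $\eta$ on a shrinking neighbourhood of $-T$, which is exactly the mechanism behind the limit displayed above.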
Then, we define
\begin{align}
\label{barHn=H}
H_n(\eta) \ &:= \ \bar{H}_n\bigg(\ldots,\int_{[-T,0]} \psi_i(x + T) d^-\eta(x) + a_i\int_{[-T,0]} \big(\tilde\phi_n(T) - \tilde\phi_n(x + T)\big) d^-\eta(x),\ldots\bigg) \notag \\
&= \ {H}\bigg(T_n\eta + \bigg(\sum_{i=0}^n \frac{n+1-i}{n+1} a_i e_i + a_{-1} e_{-1}\bigg)\int_{-T}^0 \big(\eta(x) - \eta(-T)\big) \phi_n(x + T) dx\bigg) \notag \\
&= \ {H}\bigg(T_n\eta + \bigg(T_n\gamma + \frac{1}{T(T-1)} e_{-1}\bigg)\int_{-T}^0 \big(\eta(x) - \eta(-T)\big) \phi_n(x + T) dx\bigg),
\end{align}
for all $\eta\in C([-T,0])$ and $n\in\N$, where $\gamma(x):=-x/(T-1)$, $\forall\,x\in[-T,0]$. Then, the sequence $(H_n)_n$ is equicontinuous on compact sets and converges pointwise to ${H}$ as $n\rightarrow\infty$.
\vspace{1mm}
\noindent\textbf{Step III.} \emph{Smoothing of $\bar{H}_n(\cdot)$.} From \eqref{barHn=H} it follows that for any compact subset $K\subset C([-T,0])$ there exists a continuity modulus $m_K$, independent of $n\in\N$, such that
\begin{align}
\label{UniformContinuitybarH_n}
&\bigg|\bar{H}_n\bigg(\ldots,\int_{[-T,0]} \psi_i(x + T) d^-\eta_1(x) + a_i\int_{[-T,0]} \big(\tilde\phi_n(T) - \tilde\phi_n(x + T)\big) d^-\eta_1(x) + \xi_i,\ldots\bigg) \notag \\
&- \bar{H}_n\bigg(\ldots,\int_{[-T,0]} \psi_i(x + T) d^-\eta_2(x) + a_i\int_{[-T,0]} \big(\tilde\phi_n(T) - \tilde\phi_n(x + T)\big) d^-\eta_2(x) + \xi_i,\ldots\bigg)\bigg| \notag \\
&\leq \ m_K(\|\eta_1-\eta_2\|_\infty),
\end{align}
for all $\eta_1,\eta_2\in K$, $n\in\N$, $\xi=(\xi_{-1},\ldots,\xi_n)\in E_{n+2}$, where $E_{n+2}:=\{\xi=(\xi_{-1},\ldots,\xi_n)\in\R^{n+2}\colon|\xi_i|\leq 2^{-(i+1)},\,i=-1,\ldots,n\}$. Indeed, set
\[
\mathcal K \ := \ K\cup\tilde K,
\]
where
\begin{align*}
\tilde K \ := \ \bigg\{\eta\in C([-T,0])\colon\eta \ &= \ T_n\eta_1 + \Big(T_n\gamma + \frac{1}{T(T-1)} e_{-1}\Big)\int_{-T}^0 \big(\eta_1(x) - \eta_1(-T)\big) \phi_n(x + T) dx \\
&\quad \ + \sum_{i=0}^n \frac{n+1-i}{n+1} \xi_ie_i + \xi_{-1}e_{-1},\,\text{for some }\eta_1\in K,\,n\in\N,\,\xi\in E_{n+2}\bigg\}.
\end{align*}
\vspace{1mm}
\noindent\textbf{Digression.} \emph{$\mathcal K$ is a relatively compact subset of $C([-T,0])$.} Since $K$ is compact, it is enough to prove that $\tilde K$ is relatively compact. To this end, define
\begin{align*}
K_1 \ := \ \bigg\{\eta\in C([-T,0])\colon\eta \ &= \ T_n\eta_1 + \Big(T_n\gamma + \frac{1}{T(T-1)} e_{-1}\Big)\int_{-T}^0 \big(\eta_1(x) - \eta_1(-T)\big) \phi_n(x + T) dx \\
&\quad \ \text{for some }\eta_1\in K,\,n\in\N\bigg\}, \\
K_2 \ := \ \bigg\{\eta\in C([-T,0])\colon\eta \ &= \ \sum_{i=-1}^n \xi_ie_i,\,\text{for some }\,n\in\N,\,\xi\in E_{n+2}\bigg\}.
\end{align*}
Then $\tilde K\subset K_1+K_2$, where $K_1+K_2$ denotes the sum of the sets $K_1$ and $K_2$, i.e., $K_1+K_2=\{\eta\in C([-T,0])\colon\eta=\eta_1+\eta_2,\,\text{for some }\eta_1\in K_1,\,\eta_2\in K_2\}$. In order to prove that $\tilde K$ is relatively compact, it is enough to show that both $K_1$ and $K_2$ are relatively compact sets.
Firstly, let us prove that $K_1$ is relatively compact. Take a sequence $(\eta_\ell)_{\ell\in\N}$ in $K_1$. Our aim is to prove that $(\eta_\ell)_{\ell\in\N}$ admits a convergent subsequence. We begin noting that, for every $\ell\in\N$, there exist $\eta_{1,\ell}\in C([-T,0])$ and $n_\ell\in\N$ such that
\[
\eta_\ell \ = \ T_{n_\ell}\eta_{1,\ell} + \Big(T_{n_\ell}\gamma + \frac{1}{T(T-1)} e_{-1}\Big)\int_{-T}^0 \big(\eta_{1,\ell}(x) - \eta_{1,\ell}(-T)\big) \phi_{n_\ell}(x + T) dx.
\]
Let us suppose that $(n_\ell)_{\ell\in\N}$ admits a subsequence diverging
to infinity (the other cases can be treated in an even simpler way),
still denoted by $(n_\ell)_{\ell\in\N}$. Then $T_{n_\ell}\gamma\rightarrow\gamma$ in $C([-T,0])$. Since $(\eta_{1,\ell})_{\ell\in\N}\subset K$ and $K$ is compact, there exists a subsequence, still denoted by $(\eta_{1,\ell})_{\ell\in\N}$, which converges to some $\eta_{1,\infty}\in K$. Then, $T_{n_\ell}\eta_{1,\ell}\rightarrow\eta_{1,\infty}$ as $\ell\rightarrow\infty$. Indeed
\[
\|T_{n_\ell}\eta_{1,\ell}-\eta_{1,\infty}\|_\infty \ \leq \ \|T_{n_\ell}\eta_{1,\ell}-T_{n_\ell}\eta_{1,\infty}\|_\infty + \|T_{n_\ell}\eta_{1,\infty}-\eta_{1,\infty}\|_\infty.
\]
Then, the claim follows since $T_{n_\ell}\eta_{1,\infty}\rightarrow\eta_{1,\infty}$ in $C([-T,0])$ and
\[
\|T_{n_\ell}\eta_{1,\ell}-T_{n_\ell}\eta_{1,\infty}\|_\infty \ \overset{\text{by \eqref{E:UniformBoundT_n}}}{\leq} \ M \|\eta_{1,\ell}-\eta_{1,\infty}\|_\infty \ \overset{\ell\rightarrow\infty}{\longrightarrow} \ 0.
\]
Proceeding in a similar way, we see that
\begin{align*}
\int_{-T}^0 \big(\eta_{1,\ell}(x) - \eta_{1,\ell}(-T)\big) \phi_{n_\ell}(x + T) dx \ &= \ \int_{-T}^0 \eta_{1,\ell}(x)\phi_{n_\ell}(x + T) dx - \eta_{1,\ell}(-T) \\
&\overset{\ell\rightarrow\infty}{\longrightarrow} \ \eta_{1,\infty}(-T) - \eta_{1,\infty}(-T) \ = \ 0.
\end{align*}
In conclusion, we get $\eta_\ell\rightarrow\eta_{1,\infty}$, from which the claim follows.
Let us now prove that $K_2$ is relatively compact. Let $(\eta_\ell)_{\ell\in\N}$ be a sequence in $K_2$ and let us prove that $(\eta_\ell)_{\ell\in\N}$ admits a convergent subsequence in $C([-T,0])$. We first notice that, for every $\ell\in\N$, there exist $n_\ell\in\N$ and $\xi_\ell=(\xi_{-1,\ell},\ldots,\xi_{n_\ell,\ell})\in E_{n_\ell+2}$ such that
\[
\eta_\ell \ = \ \sum_{i=-1}^{n_\ell} \xi_{i,\ell}e_i.
\]
As we already did in the proof for $K_1$, we suppose that the sequence $(n_\ell)_{\ell\in\N}$ diverges to $\infty$. Notice that, for every $i\in\{-1,0,1,2,\ldots\}$, there exists a subsequence of $(\xi_{i,\ell})_\ell$ which converges to some $\xi_{i,\infty}$ satisfying $|\xi_{i,\infty}|\leq2^{-(i+1)}$. By a diagonalisation argument we construct a subsequence of $(\eta_\ell)_{\ell\in\N}$, still denoted by $(\eta_\ell)_{\ell\in\N}$, such that for every $i$ the sequence $(\xi_{i,\ell})_{\ell\in\N}$ converges to $\xi_{i,\infty}$. As a consequence, $\eta_\ell$ converges to $\eta_\infty=\sum_{i=-1}^\infty \xi_{i,\infty}e_i$ as $\ell\rightarrow\infty$. This proves the claim.
\vspace{1mm}
\noindent\textbf{Step III (Continued).} Since $\Kc$ is a relatively compact subset of $C([-T,0])$, property \eqref{UniformContinuitybarH_n} follows from the fact that $H$ is continuous on $C([-T,0])$, and consequently uniformly continuous on $\mathcal K$.
To alleviate the presentation, we suppose, without loss of generality, that $H_n$ has the following form (with the same functions $\varphi_i$ as in the expression of $b_n,\sigma_n,F_n$)
\[
H_n(\eta) \ = \ \bar H_n\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta(x),\ldots,\int_{[-T,0]}\varphi_{N_n}(x+T)d^-\eta(x)\bigg).
\]
Here $\bar H_n\colon\R^{N_n}\rightarrow\R$. Then, property \eqref{UniformContinuitybarH_n} can be written as follows: for any compact subset $K\subset C([-T,0])$ there exists a continuity modulus $m_K$, independent of $n\in\N$, such that
\begin{align}
\label{UniformContinuitybarH_n2}
\bigg|\bar{H}_n\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_1(x) + \xi_1,\ldots\bigg) - \bar{H}_n\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_2(x) + \xi_1,\ldots\bigg)\bigg| \notag \\
\leq \ m_K(\|\eta_1-\eta_2\|_\infty),
\end{align}
for all $\eta_1,\eta_2\in K$, $n\in\N$, $\xi=(\xi_1,\ldots,\xi_{N_n})\in E_{N_n}$, where we recall that $E_{N_n}=\{\xi=(\xi_1,\ldots,\xi_{N_n})\in\R^{N_n}\colon|\xi_i|\leq 2^{1-i},\,i=1,\ldots,N_n\}$.
Now, for any $n$ consider the function $\rho_n\in C^\infty(\R^{N_n})$ given by
\[
\rho_n(\xi) \ = \ c \prod_{i=1}^{N_n} \exp\bigg(\frac{1}{\xi_i^2 - 2^{2(1-i)}}\bigg) 1_{\{|\xi_i|<2^{1-i}\}}, \qquad \forall\,\xi=(\xi_1,\ldots,\xi_{N_n})\in\R^{N_n},
\]
with $c>0$ such that $\int_{\R^{N_n}} \rho_n(\xi) d\xi = 1$. Set $\rho_{n,k}(\xi) := k^{N_n}\rho_n(k\,\xi)$, $\forall\,\xi\in\R^{N_n}$, $k\in\N$. Let us now define, for any $n,k\in\N$,
\[
\bar H_{n,k}(x) \ = \ \int_{\R^{N_n}} \rho_{n,k}(\xi) \bar H_n(x - \xi) d\xi \ = \ \int_{E_{N_n}} \rho_{n,k}(\xi) \bar H_n(x - \xi) d\xi,
\]
for all $x\in\R^{N_n}$. Notice that, for any $n\in\N$, the sequence $(\bar{H}_{n,k}(\cdot))_{k\in\N}$ is equicontinuous on compact subsets of $\R^{N_n}$, satisfies a polynomial growth condition (uniform in both $n$ and $k$), converges pointwise to $\bar{H}_n(\cdot)$, and satisfies item (iv) of Theorem \ref{T:ExistenceStrict}. Then, we define
\[
{H}_{n,k}(\eta) \ = \ \bar{H}_{n,k}\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta(x),\ldots,\int_{[-T,0]}\varphi_{N_n}(x+T)d^-\eta(x)\bigg),
\]
for all $\eta\in C([-T,0])$ and $n,k\in\N$. Notice that the functions $H_{n,k}$, $n,k\in\N$, are equicontinuous on compact subsets of $C([-T,0])$. Indeed, let $K$ be a compact subset of $C([-T,0])$ and $\eta_1,\eta_2\in K$, then (using property \eqref{UniformContinuitybarH_n2} and the fact that $\int_{E_{N_n}} \rho_{n,k}(\xi) d\xi = 1$)
\begin{align*}
&|H_{n,k}(\eta_1) - H_{n,k}(\eta_2)| \\
&= \ \bigg|\bar H_{n,k}\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_1(x),\ldots\bigg) - \bar H_{n,k}\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_2(x),\ldots\bigg)\bigg| \\
&\leq \ \int_{E_{N_n}} \rho_{n,k}(\xi) \bigg|\bar H_n\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_1(x) + \xi_1,\ldots\bigg) \\
&\quad \ - \bar H_n\bigg(\int_{[-T,0]}\varphi_1(x+T)d^-\eta_2(x) + \xi_1,\ldots\bigg)\bigg| d\xi \ \leq \ m_K(\|\eta_1-\eta_2\|_\infty).
\end{align*}
This proves the equicontinuity on compact sets of $H_{n,k}$, $n,k\in\N$. Set $G:=H$, $G_n:=H_n$, and $G_{n,k}:=H_{n,k}$, for all $n,k\in\N$. Then, a direct application of Lemma \ref{L:StabilityApp} yields the existence of a subsequence $(H_{n,k_n})_{n\in\N}$ which converges pointwise to $H$. For simplicity of notation, we denote $(H_{n,k_n})_{n\in\N}$ simply by $(H_n)_{n\in\N}$.
\vspace{1mm}
\noindent\textbf{Step IV.} \emph{Conclusion.} Let us consider, for any $n\in\N$ and $(t,\eta)\in[0,T]\times C([-T,0])$, the following forward-backward system of stochastic differential equations:
\begin{equation}
\label{FBSDE}
\begin{cases}
X_s^{n,t,\eta} \ = \ \eta(0\wedge(s-t)) + \int_t^{t\vee s} b_n(r,\X_r^{n,t,\eta}) dr + \int_t^{t\vee s} \sigma_n(r,\X_r^{n,t,\eta}) dW_r, &s\in[t-T,T], \\
Y_s^{n,t,\eta} \ = \ H_n(\X_T^{n,t,\eta}) + \int_s^T F_n(r,\X_r^{n,t,\eta},Y_r^{n,t,\eta},Z_r^{n,t,\eta}) dr - \int_s^T Z_r^{n,t,\eta} dW_r, &s\in[t,T].
\end{cases}
\end{equation}
Under the assumptions on $b_n$ and $\sigma_n$, it follows from Proposition \ref{P:SDE} that there exists a unique continuous process $X^{n,t,\eta}$ strong solution to the forward equation in \eqref{FBSDE}. Moreover, from Theorem 4.1 in \cite{parpen90} it follows that, under the assumptions on $F_n$ and $H_n$, there exists a unique solution $(Y^{n,t,\eta},Z^{n,t,\eta})\in\S^2(t,T)\times\H^2(t,T)$ to the backward equation in \eqref{FBSDE}.
Then, it follows from Theorem \ref{T:ExistenceStrict} that, for any $n\in\N$, the function
\[
\Uc_n(t,\eta) \ = \ Y_t^{n,t,\eta} \ = \ \E\bigg[\int_t^T F_n(s,\mathbb X_s^{n,t,\eta},Y_s^{n,t,\eta},Z_s^{n,t,\eta})ds + {H}_n(\mathbb X_T^{n,t,\eta})\bigg],
\]
$\forall\,(t,\eta)\in[0,T]\times C([-T,0])$, is a strict solution to equation \eqref{KolmEq} with coefficients ${H}_n$, $F_n$, $b_n$, and $\sigma_n$. From estimates \eqref{EstimateSupXn} and \eqref{EstimateBSDE2} together with the polynomial growth condition of $F_n,H_n$ (uniform in $n$), we see that $\Uc_n$ satisfies a polynomial growth condition uniform in $n$.
We can now apply Lemma \ref{L:Stability} to the sequence $(\Uc_n,H_n,F_n,b_n,\sigma_n)_{n\in\N}$, from which we deduce: first, the convergence of the sequence $(\Uc_n)_{n\in\N}$ to the map $\Uc$ given by \eqref{Feynman-Kac2}; secondly, that $\Uc$ is a strong-viscosity solution to equation \eqref{KolmEq}. This concludes the proof.
\ep
\begin{Remark}
{\rm
(i) The problem of finding an explicit characterization of the triples $(b,\sigma,F)$ for which there exists a sequence $(b_n,\sigma_n,F_n)_n$ as in Theorem \ref{T:ExistSV} is currently under investigation. We recall that the particular case $b\equiv0$, $\sigma\equiv1$, and $F\equiv0$ was addressed in Theorem 3.4 of \cite{cosso_russo14}.
\vspace{1mm}
\noindent(ii) The result of Theorem \ref{T:ExistSV} can be improved as follows. Items (ii) and (iii) in Theorem \ref{T:ExistSV} can be replaced by the following weaker assumption: \emph{for every compact subset $K\subset C([-T,0])$, there exists a continuity modulus $m_K$, independent of $n\in\N$, such that
\begin{align*}
\bigg|\bar{F}_n\!\bigg(\!t,\!\int_{[-t,0]}\!\!\!\varphi_1(x+t)d^-\eta_1(x) + \xi_1,\ldots,y,z\!\bigg)\! - \bar{F}_n\!\bigg(\!t,\!\int_{[-t,0]}\!\!\!\varphi_1(x+t)d^-\eta_2(x) + \xi_1,\ldots,y,z\!\bigg)\bigg| \\
+ \bigg|\bar{b}_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta_1(x) + \xi_1,\ldots\bigg) - \bar{b}_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta_2(x) + \xi_1,\ldots\bigg)\bigg| \\
+ \bigg|\bar{\sigma}_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta_1(x) + \xi_1,\ldots\bigg) - \bar{\sigma}_n\bigg(\int_{[-t,0]}\varphi_1(x+t)d^-\eta_2(x) + \xi_1,\ldots\bigg)\bigg| \\
\leq \ m_K(\|\eta_1 - \eta_2\|_\infty),
\end{align*}
for all $n\in\N$, $\eta_1,\eta_2\in K$, $y,z\in\R$, $t\in[0,T]$, $\xi\in E_{N_n}$, where $E_{N_n}=\{\xi=(\xi_1,\ldots,\xi_{N_n})\in\R^{N_n}\colon|\xi_i|\leq 2^{1-i},\,i=1,\ldots,N_n\}$.}
In this case, we perform a smoothing of $(\bar b_n,\bar\sigma_n,\bar F_n)$ by means of convolutions as we did for $\bar H_n$ in Step III of the proof of Theorem \ref{T:ExistSV}, in order to end up with a sequence of regular coefficients satisfying items (ii) and (iii) in Theorem \ref{T:ExistSV}. Then, we conclude the proof proceeding as in Step IV of the proof of Theorem \ref{T:ExistSV}.
\vspace{1mm}
\noindent(iii) In Theorem \ref{T:ExistSV} the functions $\bar b_n$ and $\bar\sigma_n$ do not depend on time, since our aim is to apply Theorem \ref{T:ExistenceStrict}, where the drift and diffusion coefficients are also time-independent. We could consider the case where $\bar b_n$ and $\bar\sigma_n$ are time-dependent, but then, as already noted in Remark \ref{R:b_sigma_time-dep}, we would have to take $F$ of the form $F=F(t,\eta)$, i.e., $F$ independent of $(y,z)$. Indeed, in this case, we could rely on Theorem 3.5 in \cite{cosso_russo15a} instead of Theorem \ref{T:ExistenceStrict}.
\ep
}
\end{Remark}
\appendix
\renewcommand\thesection{Appendix}
\section{}
In the present appendix we fix a complete probability space $(\Omega,\Fc,\P)$ on which a $d$-dimensional Brownian motion $W=(W_t)_{t\geq0}$ is defined. We denote $\F=(\Fc_t)_{t\geq0}$ the completion of the natural filtration generated by $W$.
\renewcommand\thesection{\Alph{subsection}}
\renewcommand\thesubsection{\Alph{subsection}.}
\subsection{Estimates for path-dependent stochastic differential equations}
Let $C([-T,0];\R^d)$ denote the Banach space of all continuous paths $\eta\colon[-T,0]\rightarrow\R^d$ endowed with the supremum norm $\|\eta\|=\sup_{t\in[-T,0]}|\eta(t)|$. Notice that, when $d=1$, we simply write $C([-T,0])$ instead of $C([-T,0];\R)$. In the present section we consider the $d$-dimensional path-dependent SDE:
\begin{equation}
\label{SDE_d-dim}
\begin{cases}
dX_s \ = \ b(s,\mathbb X_s)ds + \sigma(s,\mathbb X_s)dW_s, \qquad\qquad & s\in[t,T], \\
X_s \ = \ \eta(s-t), & s\in[-T+t,t],
\end{cases}
\end{equation}
where on $b\colon[0,T]\times C([-T,0];\R^d)\rightarrow\R^d$ and $\sigma\colon[0,T]\times C([-T,0];\R^d)\rightarrow\R^{d\times d}$ we shall impose the following assumptions.
\vspace{3mm}
\noindent\textbf{(A$_{b,\sigma}$)} \hspace{3mm} $b$ and $\sigma$ are Borel measurable functions satisfying, for some positive constant $C$,
\begin{align*}
|b(t,\eta)-b(t,\eta')| + |\sigma(t,\eta)-\sigma(t,\eta')| \ &\leq \ C\|\eta-\eta'\|, \\
|b(t,0)| + |\sigma(t,0)| \ &\leq \ C,
\end{align*}
for all $t\in[0,T]$ and $\eta,\eta'\in C([-T,0];\R^d)$.
\vspace{3mm}
Notice that equation \eqref{SDE_d-dim} on $[t,T]$ becomes equation \eqref{SDE_Markov} when $b=b(t,\eta)$ and $\sigma=\sigma(t,\eta)$ are non-path-dependent, i.e., when at time $t$ they depend only on the current value $\eta(0)$ of the path. On the other hand, when $d=1$ equation \eqref{SDE_d-dim} reduces to equation \eqref{SDE}.
\begin{Lemma}
\label{L:SDE}
Under Assumption {\bf (A$_{b,\sigma}$)}, for any $(t,\eta)\in[0,T]\times C([-T,0];\R^d)$ there exists a unique $($up to indistinguishability$)$ $\F$-adapted continuous process $X^{t,\eta}=(X_s^{t,\eta})_{s\in[-T+t,T]}$ strong solution to equation \eqref{SDE_d-dim}. Moreover, for any $p\geq1$ there exists a positive constant $C_p$ such that
\begin{equation}
\label{EstimateSupX}
\E\Big[\sup_{s\in[-T+t,T]}\big|X_s^{t,\eta}\big|^p\Big] \ \leq \ C_p \big(1 + \|\eta\|^p\big).
\end{equation}
\end{Lemma}
\textbf{Proof.}
Existence and uniqueness follow from Theorem 14.23 in \cite{jacod79}. Concerning estimate \eqref{EstimateSupX} we refer to Proposition 3.1 in \cite{cosso_russo15a} (notice that in \cite{cosso_russo15a}, estimate \eqref{EstimateSupX} is proved for the case $d=1$; however, proceeding along the same lines, we can prove \eqref{EstimateSupX} for a generic $d\in\N\backslash\{0\}$).
\ep
\begin{Lemma}
\label{L:AppendixX}
Suppose that Assumption {\bf (A$_{b,\sigma}$)} holds and let $(b_n,\sigma_n)_n$ be a sequence satisfying Assumption {\bf (A$_{b,\sigma}$)} with a positive constant $C$ independent of $n$. Moreover, $(b_n,\sigma_n)$ converges pointwise to $(b,\sigma)$ as $n\rightarrow\infty$. For any $n\in\N$ and $(t,\eta)\in[0,T]\times C([-T,0];\R^d)$, denote by $X^{n,t,\eta}=(X_s^{n,t,\eta})_{s\in[-T+t,T]}$ the unique solution to the path-dependent SDE
\begin{equation}
\label{Xn}
\begin{cases}
dX_s^{n,t,\eta} \ = \ b_n(s,\mathbb X_s^{n,t,\eta})ds + \sigma_n(s,\mathbb X_s^{n,t,\eta})dW_s, \qquad\qquad & t\leq s\leq T, \\
X_s^{n,t,\eta} \ = \ \eta(s-t), & -T+t\leq s\leq t.
\end{cases}
\end{equation}
Then, for every $p\geq1$, we have
\begin{equation}
\label{EstimateSupXn}
\E\Big[\sup_{t\leq s\leq T} |X_s^{n,t,\eta}|^p\Big] \ \leq \ C_p \big(1 + \|\eta\|^p\big), \qquad \forall\,(t,\eta)\in [0,T]\times C([-T,0];\R^d),\,\forall\,n\in\N,
\end{equation}
for some positive constant $C_p$, and
\begin{equation}
\label{limXn-X}
\lim_{n\rightarrow\infty} \E\Big[\sup_{t\leq s\leq T} |X_s^{n,t,\eta} - X_s^{t,\eta}|^p\Big] \ = \ 0, \qquad \forall\,(t,\eta)\in [0,T]\times C([-T,0];\R^d).
\end{equation}
\end{Lemma}
\textbf{Proof.}
For any $n\in\N$ and $(t,\eta)\in[0,T]\times C([-T,0];\R^d)$, the existence and uniqueness of $(X_s^{n,t,\eta})_{s\in[-T+t,T]}$, as well as estimate \eqref{EstimateSupXn}, can be proved proceeding as in Lemma \ref{L:SDE}. It remains to prove \eqref{limXn-X}. Observe that
\[
X_s^{n,t,\eta} - X_s^{t,\eta} \ = \ \int_t^s \big(b_n(r,\X_r^{n,t,\eta}) - b(r,\X_r^{t,\eta})\big) dr + \int_t^s \big(\sigma_n(r,\X_r^{n,t,\eta}) - \sigma(r,\X_r^{t,\eta})\big) dW_r.
\]
Then, taking the $p$-th power, we get (recalling the standard inequality $(a+b)^p\leq2^{p-1}(a^p+b^p)$, for any $a,b\geq0$) that $|X_s^{n,t,\eta} - X_s^{t,\eta}|^p$ is less than or equal to
\[
2^{p-1}\bigg|\int_t^s \big(b_n(r,\X_r^{n,t,\eta}) - b(r,\X_r^{t,\eta})\big) dr\bigg|^p + 2^{p-1}\bigg|\int_t^s \big(\sigma_n(r,\X_r^{n,t,\eta}) - \sigma(r,\X_r^{t,\eta})\big) dW_r\bigg|^p.
\]
In the sequel we shall denote by $c_p$ a generic positive constant which may change from line to line, is independent of $n$, and depends only on $T$, $p$, and the Lipschitz constant of $b_n,\sigma_n$. Taking the supremum over $s\in[t,T]$ and applying H\"older's inequality to the drift term, we get
\begin{align}
\label{E:SupProof2}
\|\X_s^{n,t,\eta} - \X_s^{t,\eta}\|^p \ &\leq \ c_p\int_t^s \big|b_n(r,\X_r^{n,t,\eta}) - b(r,\X_r^{t,\eta})\big|^p dr \notag \\
&\quad \ + 2^{p-1}\sup_{t \leq u \leq s}\bigg|\int_t^u \big(\sigma_n(r,\X_r^{n,t,\eta}) - \sigma(r,\X_r^{t,\eta})\big) dW_r\bigg|^p.
\end{align}
Notice that
\begin{align}
\label{E:bn-b}
&\int_t^s \big|b_n(r,\X_r^{n,t,\eta}) - b(r,\X_r^{t,\eta})\big|^p dr \notag \\
&\leq \ 2^{p-1}\int_t^s \big|b_n(r,\X_r^{n,t,\eta}) - b_n(r,\X_r^{t,\eta})\big|^p dr + 2^{p-1}\int_t^s \big|b_n(r,\X_r^{t,\eta}) - b(r,\X_r^{t,\eta})\big|^p dr \notag \\
&\leq \ c_p\int_t^s \|\X_r^{n,t,\eta} - \X_r^{t,\eta}\|^p dr + 2^{p-1}\int_t^s \big|b_n(r,\X_r^{t,\eta}) - b(r,\X_r^{t,\eta})\big|^p dr.
\end{align}
In addition, from the Burkholder-Davis-Gundy inequality, the elementary bound $(a+b)^{p/2}\leq2^{p/2}(a^{p/2}+b^{p/2})$, $a,b\geq0$, and H\"older's inequality in time (we take $p\geq2$; the case $1\leq p<2$ then follows from the case $p=2$ by Jensen's inequality), we have
\begin{align}
\label{E:sigman-sigma}
&\E\bigg[\sup_{t \leq u \leq s}\bigg|\int_t^u \big(\sigma_n(r,\X_r^{n,t,\eta}) - \sigma(r,\X_r^{t,\eta})\big) dW_r\bigg|^p\bigg] \ \leq \ c_p \E\bigg[\bigg(\int_t^s \big|\sigma_n(r,\X_r^{n,t,\eta})-\sigma(r,\X_r^{t,\eta})\big|^2 dr\bigg)^{p/2}\bigg] \notag \\
&\leq \ c_p \E\bigg[\int_t^s \big|\sigma_n(r,\X_r^{n,t,\eta})-\sigma_n(r,\X_r^{t,\eta})\big|^p dr\bigg] + c_p \E\bigg[\int_t^s \big|\sigma_n(r,\X_r^{t,\eta})-\sigma(r,\X_r^{t,\eta})\big|^p dr\bigg] \notag \\
&\leq \ c_p\int_t^s \E\big[\|\X_r^{n,t,\eta} - \X_r^{t,\eta}\|^p\big] dr + c_p \int_t^s \E\big[\big|\sigma_n(r,\X_r^{t,\eta})-\sigma(r,\X_r^{t,\eta})\big|^p\big] dr.
\end{align}
Taking the expectation in \eqref{E:SupProof2}, and using \eqref{E:bn-b} and \eqref{E:sigman-sigma}, we find
\begin{align*}
\E\big[\|\X_s^{n,t,\eta} - \X_s^{t,\eta}\|^p\big] \ &\leq \ c_p\int_t^s \E\big[\|\X_r^{n,t,\eta} - \X_r^{t,\eta}\|^p\big] dr + c_p \int_t^s \E\big[\big|b_n(r,\X_r^{t,\eta}) - b(r,\X_r^{t,\eta})\big|^p\big] dr \\
&\quad \ + c_p \int_t^T \E\big[\big|\sigma_n(r,\X_r^{t,\eta})-\sigma(r,\X_r^{t,\eta})\big|^p\big] dr.
\end{align*}
Then, applying Gronwall's lemma to the map $r\mapsto\E[\|\X_r^{n,t,\eta} - \X_r^{t,\eta}\|^p]$, we get
\begin{align*}
\E\Big[\sup_{t\leq s\leq T} |X_s^{n,t,\eta} - X_s^{t,\eta}|^p\Big] \ &\leq \ c_p \int_t^T \E\big[\big|b_n(r,\X_r^{t,\eta}) - b(r,\X_r^{t,\eta})\big|^p\big] dr \\
&\quad \ + c_p \int_t^T \E\big[\big|\sigma_n(r,\X_r^{t,\eta})-\sigma(r,\X_r^{t,\eta})\big|^p\big] dr.
\end{align*}
In conclusion, \eqref{limXn-X} follows from estimate \eqref{EstimateSupX} and Lebesgue's dominated convergence theorem.
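To make the domination explicit (a sketch, using the linear growth of $b_n$ and $b$ implied by {\bf (A$_{b,\sigma}$)}, with a constant independent of $n$):
\[
\big|b_n(r,\X_r^{t,\eta}) - b(r,\X_r^{t,\eta})\big|^p \ \leq \ c_p\Big(1 + \sup_{-T+t\leq u\leq T}\big|X_u^{t,\eta}\big|^p\Big),
\]
and the right-hand side is $dr\otimes d\P$-integrable by \eqref{EstimateSupX}; since $b_n\rightarrow b$ pointwise, the corresponding integral vanishes in the limit, and the $\sigma$-term is treated in the same way.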
\ep
\setcounter{Theorem}{0}
\setcounter{equation}{0}
\subsection{Estimates for backward stochastic differential equations}
We derive estimates for the norm of the $Z$ and $K$ components of supersolutions to backward stochastic differential equations, in terms of the norm of the $Y$ component. These results are standard, but do not seem to be available in the literature in the form needed here. First, let us introduce a generator function $F\colon[0,T]\times\Omega\times\R\times\R^d\rightarrow\R$ satisfying the usual assumptions:
\begin{enumerate}
\item[(A.a)] $F(\cdot,y,z)$ is $\F$-predictable for every $(y,z)\in\R\times\R^d$.
\item[(A.b)] There exists a positive constant $C_F$ such that
\[
|F(s,y,z) - F(s,y',z')| \ \leq \ C_F\big(|y-y'| + |z-z'|\big),
\]
for all $y,y'\in\R$, $z,z'\in\R^d$, $ds\otimes d\P$-a.e.
\item[(A.c)] Integrability condition:
\[
\E\bigg[\int_t^T |F(s,0,0)|^2 ds\bigg] \ \leq \ M_F,
\]
for some positive constant $M_F$.
\end{enumerate}
\begin{Proposition}
\label{P:EstimateBSDEAppendix}
For any $t,T\in\R_+$, $t<T$, consider $(Y_s,Z_s,K_s)_{s\in[t,T]}$ satisfying the following.
\begin{enumerate}
\item[\textup{(i)}] $Y\in\S^2(t,T)$ and it is continuous.
\item[\textup{(ii)}] $Z$ is an $\R^d$-valued $\F$-predictable process such that $\P(\int_t^T |Z_s|^2 ds<\infty)=1$.
\item[\textup{(iii)}] $K$ is a real nondecreasing $($or nonincreasing$)$ continuous $\F$-predictable process such that $K_t = 0$.
\end{enumerate}
Suppose that $(Y_s,Z_s,K_s)_{s\in[t,T]}$ solves the BSDE, $\P$-a.s.,
\begin{equation}
\label{E:BSDE_Appendix}
Y_s \ = \ Y_T + \int_s^T F(r,Y_r,Z_r) dr + K_T - K_s - \int_s^T \langle Z_r, dW_r\rangle, \qquad t \leq s \leq T,
\end{equation}
for some generator function $F$ satisfying conditions \textup{(A.b)-(A.c)}. Then $(Z,K)\in\H^2(t,T)^d\times\A^{+,2}(t,T)$ and
\begin{equation}
\label{EstimateBSDE1}
\|Z\|_{\H^2(t,T)^d}^2 + \|K\|_{\S^2(t,T)}^2 \ \leq \ C\bigg(\|Y\|_{\S^2(t,T)}^2 + \E\int_t^T |F(s,0,0)|^2 ds\bigg),
\end{equation}
for some positive constant $C$ depending only on $T$ and $C_F$, the Lipschitz constant of $F$. If in addition $K\equiv0$, we have the standard estimate
\begin{equation}
\label{EstimateBSDE2}
\|Y\|_{\S^2(t,T)}^2 + \|Z\|_{\H^2(t,T)^d}^2 \ \leq \ C'\bigg(\E\big[|Y_T|^2\big] + \E\int_t^T |F(s,0,0)|^2 ds\bigg),
\end{equation}
for some positive constant $C'$ depending only on $T$ and $C_F$.
\end{Proposition}
\textbf{Proof.}
\emph{Proof of estimate \eqref{EstimateBSDE1}.} Let us consider the case where $K$ is nondecreasing. For every $k\in\N$, define the stopping time
\[
\tau_k \ = \ \inf\bigg\{s\geq t\colon \int_t^s |Z_r|^2 dr \geq k \bigg\} \wedge T.
\]
Then, the local martingale $(\int_t^s Y_r\langle 1_{[t,\tau_k]}(r)Z_r,dW_r\rangle)_{s\in[t,T]}$ satisfies, using Burkholder-Davis-Gundy inequality,
\[
\E\bigg[\sup_{t \leq s \leq T}\bigg|\int_t^s Y_r\langle 1_{[t,\tau_k]}(r)Z_r,dW_r\rangle\bigg|\bigg] \ < \ \infty,
\]
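In more detail, the Burkholder-Davis-Gundy and Cauchy-Schwarz inequalities give (a sketch, with $c$ a universal constant):
\begin{align*}
\E\bigg[\sup_{t \leq s \leq T}\bigg|\int_t^s Y_r\langle 1_{[t,\tau_k]}(r)Z_r,dW_r\rangle\bigg|\bigg] \ &\leq \ c\,\E\bigg[\bigg(\int_t^{\tau_k} |Y_r|^2 |Z_r|^2\, dr\bigg)^{1/2}\bigg] \\
&\leq \ c\,\sqrt{k}\,\E\Big[\sup_{t\leq r\leq T}|Y_r|\Big] \ \leq \ c\,\sqrt{k}\,\|Y\|_{\S^2(t,T)} \ < \ \infty,
\end{align*}
where the second inequality uses the definition of $\tau_k$.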
therefore it is a martingale. As a consequence, an application of It\^o's formula to $|Y_s|^2$ between $t$ and $\tau_k$ yields
\begin{align}
\label{E:Proof_ItoAppendix}
\E\big[|Y_t|^2\big] + \E\int_t^{\tau_k} |Z_r|^2 dr \ &= \ \E\big[|Y_{\tau_k}|^2\big] + 2\E\int_t^{\tau_k} Y_r F(r,Y_r,Z_r) dr + 2 \E\int_t^{\tau_k} Y_r dK_r.
\end{align}
In the sequel $c$ and $c'$ will be two strictly positive constants depending only on $C_F$, the Lipschitz constant of $F$.
Using (A.b) and recalling the standard inequality $ab \leq a^2 + b^2/4$, for any $a,b\in\R$, we see that
\begin{equation}
\label{E:Proof_LipschitzAppendix}
2\E\int_t^{\tau_k} Y_r F(r,Y_r,Z_r) dr \ \leq \ cT\|Y\|_{\S^2(t,T)}^2 + \frac{1}{4}\E\int_t^{\tau_k} |Z_r|^2 dr + \E\int_t^T |F(r,0,0)|^2 dr.
\end{equation}
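In more detail (a sketch, with $c$ depending only on $C_F$): from \textup{(A.b)} we have $|F(r,Y_r,Z_r)|\leq|F(r,0,0)|+C_F(|Y_r|+|Z_r|)$, so that
\begin{align*}
2Y_r F(r,Y_r,Z_r) \ &\leq \ 2|Y_r||F(r,0,0)| + 2C_F|Y_r|^2 + 2C_F|Y_r||Z_r| \\
&\leq \ c\,|Y_r|^2 + |F(r,0,0)|^2 + \frac{1}{4}|Z_r|^2,
\end{align*}
where we used $2ab\leq a^2+b^2$ on the first product and $ab\leq a^2+b^2/4$ with $a=2C_F|Y_r|$, $b=|Z_r|$ on the last one; integrating on $[t,\tau_k]$ and taking expectations gives \eqref{E:Proof_LipschitzAppendix}.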
Regarding the last term on the right-hand side in \eqref{E:Proof_ItoAppendix}, for every $\eps>0$, recalling the standard inequality $2ab \leq \eps a^2 + b^2/\eps$, for any $a,b\in\R$, we have
\begin{equation}
\label{E:YdK_Appendix}
2\E\int_t^{\tau_k} Y_r dK_r \ \leq \ \frac{1}{\eps}\|Y\|_{\S^2(t,T)}^2 + \eps\E\big[|K_{\tau_k}|^2\big].
\end{equation}
Now, from \eqref{E:BSDE_Appendix} we get
\[
K_{\tau_k} \ = \ Y_t - Y_{\tau_k} - \int_t^{\tau_k} F(r,Y_r,Z_r) dr + \int_t^{\tau_k} \langle Z_r,dW_r\rangle.
\]
Therefore, recalling that $(x_1+\cdots+x_4)^2\leq4(x_1^2+\cdots+x_4^2)$, for any $x_1,\ldots,x_4\in\R$,
\[
\E\big[|K_{\tau_k}|^2\big] \ \leq \ 8\|Y\|_{\S^2(t,T)}^2 + 4T\E\int_t^{\tau_k} |F(r,Y_r,Z_r)|^2 dr + 4\E\bigg|\int_t^{\tau_k} \langle Z_r,dW_r\rangle\bigg|^2.
\]
From It\^o's isometry and (A.b), we obtain
\begin{equation}
\label{E:K<C_Appendix}
\E\big[|K_{\tau_k}|^2\big] \ \leq \ c'(1+T^2)\|Y\|_{\S^2(t,T)}^2 + c'(1+T)\E\int_t^{\tau_k} |Z_r|^2 dr + c'T\E\int_t^T |F(r,0,0)|^2 dr.
\end{equation}
Then, taking $\eps=1/(4c'(1+T))$ in \eqref{E:YdK_Appendix} we get
\begin{align}
\label{E:YdK2_Appendix}
2\E\int_t^{\tau_k}\!\! Y_r dK_r \ &\leq \ \frac{16c'(1+T)^2+1+T^2}{4(1+T)}\|Y\|_{\S^2(t,T)}^2 + \frac{1}{4}\E\int_t^{\tau_k}\!\! |Z_r|^2 dr + \frac{T}{4(1+T)}\E\int_t^T\!\! |F(r,0,0)|^2 dr \notag \\
&\leq \ c(1+T^2)\|Y\|_{\S^2(t,T)}^2 + \frac{1}{4}\E\int_t^{\tau_k}\!\! |Z_r|^2 dr + cT\E\int_t^T\!\! |F(r,0,0)|^2 dr.
\end{align}
Plugging \eqref{E:Proof_LipschitzAppendix} and \eqref{E:YdK2_Appendix} into \eqref{E:Proof_ItoAppendix}, we end up with
\[
\E\big[|Y_t|^2\big] + \frac{1}{2}\E\int_t^{\tau_k} |Z_r|^2 dr \ \leq \ c(1+T^2)\|Y\|_{\S^2(t,T)}^2 + c(1+T)\E\int_t^T |F(r,0,0)|^2 dr.
\]
Then, from monotone convergence theorem,
\begin{equation}
\label{E:Z<C_Appendix}
\E\int_t^T |Z_r|^2 dr \ \leq \ c(1+T^2)\|Y\|_{\S^2(t,T)}^2 + c(1+T)\E\int_t^T |F(r,0,0)|^2 dr.
\end{equation}
Plugging \eqref{E:Z<C_Appendix} into \eqref{E:K<C_Appendix}, and using again monotone convergence theorem, we finally obtain
\[
\|K\|_{\S^2(t,T)}^2 \ = \ \E\big[|K_T|^2\big] \ \leq \ c(1+T^3)\|Y\|_{\S^2(t,T)}^2 + c(1+T^2)\E\int_t^T |F(r,0,0)|^2 dr.
\]
\vspace{1mm}
\noindent\emph{Proof of estimate \eqref{EstimateBSDE2}.} The proof of this estimate is standard, see, e.g., Remark (b) immediately after Proposition 2.1 in \cite{elkaroui_peng_quenez97}. We simply recall that it proceeds in \emph{two steps}: \emph{first}, we apply It\^o's formula to $|Y_s|^2$ and take the expectation, then we use the Lipschitz property of $F$ with respect to $(y,z)$, and finally we apply Gronwall's lemma to the map $v(s):=\E[|Y_s|^2]$, $s\in[t,T]$. We then end up with the estimate
\begin{equation}
\label{EstimateBSDE_Proof}
\sup_{s\in[t,T]}\E\big[|Y_s|^2\big] + \|Z\|_{\H^2(t,T)^d}^2 \ \leq \ \bar C\bigg(\E\big[|Y_T|^2\big] + \E\int_t^T |F(s,0,0)|^2 ds\bigg),
\end{equation}
for some positive constant $\bar C$ depending only on $T$ and $C_F$. In the \emph{second} step of the proof we estimate $\|Y\|_{\S^2(t,T)}^2=\E[\sup_{t\leq s\leq T}|Y_s|^2]$ proceeding as follows: we take the square in relation \eqref{E:BSDE_Appendix}, followed by the $\sup$ over $s$ and then the expectation. Finally, the claim follows exploiting the Lipschitz property of $F$ with respect to $(y,z)$, estimate \eqref{EstimateBSDE_Proof}, and Burkholder-Davis-Gundy inequality.
\ep
\setcounter{Theorem}{0}
\setcounter{equation}{0}
\subsection{Limit theorem for backward stochastic differential equations}
We prove a limit theorem for backward stochastic differential equations designed for our purposes, inspired by the monotonic limit theorem of Peng \cite{peng00}, even if it is formulated under a different set of assumptions. In particular, the monotonicity of the sequence $(Y^n)_n$ is not assumed. On the other hand, we impose a uniform bound on the sequence $(Y^n)_n$ in $\S^p(t,T)$ for some $p>2$, instead of $p=2$ as in \cite{peng00}. Furthermore, unlike \cite{peng00}, the terminal condition and the generator function of the BSDE solved by $Y^n$ are allowed to vary with $n$.
\begin{Theorem}
\label{T:LimitThmBSDE}
Let $(F_n)_n$ be a sequence of generator functions satisfying assumptions \textup{(A.a)-(A.c)}, with the same constants $C_F$ and $M_F$ for all $n$. For any $n$, let $(Y^n,Z^n,K^n)\in\S^2(t,T)\times\H^2(t,T)^d\times\A^{+,2}(t,T)$, with $Y^n$ and $K^n$ continuous, satisfying, $\P$-a.s.,
\[
Y_s^n \ = \ Y_T^n + \int_s^T F_n(r,Y_r^n,Z_r^n) dr + K_T^n - K_s^n - \int_s^T \langle Z_r^n, dW_r\rangle, \qquad t \leq s \leq T
\]
and
\[
\|Y^n\|_{\S^2(t,T)}^2 + \|Z^n\|_{\H^2(t,T)^d}^2 + \|K^n\|_{\S^2(t,T)}^2 \ \leq \ C, \qquad \forall\,n\in\N,
\]
for some positive constant $C$, independent of $n$. Suppose, in addition, that $\sup_n\|Y^n\|_{\S^p(t,T)}<\infty$ for some $p>2$, and that there exist a generator function $F$ satisfying conditions \textup{(A.a)-(A.c)}, a continuous process $Y\in\S^2(t,T)$, and null measurable sets $N_F\subset[t,T]\times\Omega$ and $N_Y\subset\Omega$ such that
\begin{align*}
F_n(s,\omega,y,z) \ &\overset{n\rightarrow\infty}{\longrightarrow} \ F(s,\omega,y,z), \qquad \forall\,(s,\omega,y,z)\in(([t,T]\times\Omega)\backslash N_F)\times\R\times\R^d, \\
Y_s^n(\omega) \ &\overset{n\rightarrow\infty}{\longrightarrow} \ Y_s(\omega), \qquad\qquad\;\; \forall\,(s,\omega)\in[t,T]\times(\Omega\backslash N_Y).
\end{align*}
Then, there exists a unique pair $(Z,K)\in\H^2(t,T)^d\times\A^{+,2}(t,T)$ such that, $\P$-a.s.,
\begin{equation}
\label{E:BSDELimit}
Y_s \ = \ Y_T + \int_s^T F(r,Y_r,Z_r) dr + K_T - K_s - \int_s^T \langle Z_r, dW_r\rangle, \qquad t \leq s \leq T.
\end{equation}
In addition, $Z^n$ converges strongly $($resp. weakly$)$ to $Z$ in $\L^q(t,T;\R^d)$ $($resp. $\H^2(t,T)^d$$)$, for any $q\in[1,2[$, and $K_s^n$ converges weakly to $K_s$ in $L^2(\Omega,\Fc_s,\P)$, for any $s\in[t,T]$.
\end{Theorem}
\begin{Remark}
\label{R:Y_Sp<infty}
{\rm
Notice that, under the assumptions of Theorem \ref{T:LimitThmBSDE} (more precisely, given that $Y$ is continuous, $\sup_n\|Y^n\|_{\S^p(t,T)}<\infty$ for some $p>2$, $Y_s^n(\omega) \rightarrow Y_s(\omega)$ as $n$ tends to infinity for all $(s,\omega)\in[t,T]\times(\Omega\backslash N_Y)$), it follows that $\|Y\|_{\S^p(t,T)}<\infty$. Indeed, from Fatou's lemma we have
\begin{equation}
\label{E:Y_Sp<infty1}
\E\Big[\liminf_{n\rightarrow\infty}\sup_{t\leq s\leq T}|Y_s^n|^p\Big] \ \leq \ \liminf_{n\rightarrow\infty}\|Y^n\|_{\S^p(t,T)}^p \ < \ \infty.
\end{equation}
Moreover, since $Y$ is continuous, there exists a null measurable set $N_Y'\subset\Omega$ such that $s\mapsto Y_s(\omega)$ is continuous on $[t,T]$ for every $\omega\in\Omega\backslash N_Y'$. Then, for any $\omega\in\Omega\backslash(N_Y\cup N_Y')$, there exists $\tau(\omega)\in[t,T]$ such that
\begin{equation}
\label{E:Y_Sp<infty2}
\sup_{t \leq s \leq T}|Y_s(\omega)|^p \ = \ |Y_{\tau(\omega)}(\omega)|^p \ = \ \lim_{n\rightarrow\infty} |Y_{\tau(\omega)}^n(\omega)|^p \ \leq \ \liminf_{n\rightarrow\infty} \sup_{t\leq s\leq T}|Y_s^n(\omega)|^p.
\end{equation}
Therefore, combining \eqref{E:Y_Sp<infty1} with \eqref{E:Y_Sp<infty2}, we end up with $\|Y\|_{\S^p(t,T)}<\infty$.
\ep
}
\end{Remark}
\textbf{Proof.}
We begin proving the uniqueness of $(Z,K)$. Let $(Z,K),(Z',K')\in\H^2(t,T)^d\times\A^{+,2}(t,T)$ be two pairs satisfying \eqref{E:BSDELimit}. Taking the difference and rearranging the terms, we obtain
\[
\int_s^T \langle Z_r-Z_r',dW_r\rangle \ = \ \int_s^T \big(F(r,Y_r,Z_r)-F(r,Y_r,Z_r')\big) dr + K_T-K_s - (K_T'-K_s').
\]
Now, the right-hand side has finite variation, while the left-hand side, being a continuous martingale, does not have finite variation unless $Z=Z'$. This implies $Z=Z'$, from which we deduce $K=K'$.
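To spell out this step: the continuous local martingale $M_s := \int_t^s\langle Z_r-Z_r',dW_r\rangle$ coincides with a continuous process of finite variation, hence it is constant, so that its quadratic variation satisfies
\[
\langle M\rangle_T \ = \ \int_t^T |Z_r-Z_r'|^2\, dr \ = \ 0,
\]
which gives $Z=Z'$ $ds\otimes d\P$-a.e.; plugging this back into \eqref{E:BSDELimit} then yields $K=K'$.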
The rest of the proof is devoted to the existence of $(Z,K)$ and is divided into several steps.\\
\emph{Step 1. Limit BSDE.} From the assumptions, we see that there exists a positive constant $c$, independent of $n$, such that
\[
\E\int_t^T |F_n(r,Y_r^n,Z_r^n)|^2 dr \ \leq \ c, \qquad \forall\,n\in\N.
\]
It follows that the sequence $(Z_\cdot^n,F_n(\cdot,Y_\cdot^n,Z_\cdot^n))_n$ is bounded in the Hilbert space $\H^2(t,T)^d\times\L^2(t,T;\R)$. Therefore, there exists a subsequence $(Z_\cdot^{n_k},F_{n_k}(\cdot,Y_\cdot^{n_k},Z_\cdot^{n_k}))_k$ which converges weakly to some $(Z,G)\in\H^2(t,T)^d\times\L^2(t,T;\R)$. This implies that, for any $s\in[t,T]$, the following weak convergences hold in $L^2(\Omega,\Fc_s,\P)$ as $k\rightarrow\infty$:
\[
\int_t^s F_{n_k}(r,Y_r^{n_k},Z_r^{n_k}) dr \ \rightharpoonup \ \int_t^s G(r) dr, \qquad\qquad
\int_t^s \langle Z_r^{n_k},dW_r\rangle \ \rightharpoonup \ \int_t^s \langle Z_r,dW_r\rangle.
\]
Since
\[
K_s^n \ = \ Y_t^n - Y_s^n - \int_t^s F_n(r,Y_r^n,Z_r^n) dr + \int_t^s \langle Z_r^n,dW_r\rangle
\]
and, by assumption, $Y_s^n \rightarrow Y_s$ strongly in $L^2(\Omega,\Fc_s,\P)$, we also have the weak convergence, as $k\rightarrow\infty$,
\begin{equation}
\label{E:K_tau=Y}
K_s^{n_k} \ \rightharpoonup \ K_s,
\end{equation}
where
\[
K_s \ := \ Y_t - Y_s - \int_t^s G(r) dr + \int_t^s \langle Z_r,dW_r\rangle, \qquad t\leq s\leq T.
\]
Notice that $(K_s)_{t\leq s\leq T}$ is adapted and continuous, so that it is a predictable process. We have that $\E[|K_T|^2] < \infty$. Let us prove that $K$ is a nondecreasing process. For any pair $r,s$ with $t \leq r \leq s \leq T$, we have $K_r \leq K_s$, $\P$-a.s.. Indeed, let ${\xi}\in L^2(\Omega,\Fc_s,\P)$ be nonnegative, then, from the martingale representation theorem, we see that there exist a random variable $\zeta\in L^2(\Omega,\Fc_r,\P)$ and an $\F$-predictable square integrable process $\eta$ such that
\[
{\xi} \ = \ \zeta + \int_r^s \eta_u dW_u.
\]
Therefore
\begin{align*}
0 \ &\leq \ \E[{\xi}(K_s^n-K_r^n)] \ = \ \E[{\xi} K_s^n] - \E[\zeta K_r^n] - \E\bigg[\E\bigg[K_r^n\int_r^s \eta_u dW_u \bigg|\Fc_r\bigg]\bigg] \\
&= \ \E[{\xi} K_s^n] - \E[\zeta K_r^n] \ \overset{n\rightarrow\infty}{\longrightarrow} \ \E[{\xi} K_s] - \E[\zeta K_r] \ = \ \E[{\xi}(K_s-K_r)],
\end{align*}
which shows that $K_r \leq K_s$, $\P$-a.s. As a consequence, there exists a null measurable set $N\subset\Omega$ such that $K_r(\omega) \leq K_s(\omega)$, for all $\omega\in\Omega\backslash N$ and all $r,s\in\Q\cap[t,T]$, $r<s$. Then, from the continuity of $K$ it follows that $K$ is a nondecreasing process, so that $K\in\A^{+,2}(t,T)$.
Finally, we notice that the process $Z$ in expression \eqref{E:K_tau=Y} is uniquely determined, as can be seen by identifying the Brownian parts and the finite variation parts in \eqref{E:K_tau=Y}. Thus, not only the subsequence $(Z^{n_k})_k$ but the whole sequence $(Z^n)_n$ converges weakly to $Z$ in $\H^2(t,T)^d$. It remains to show that $G(r)$ in \eqref{E:K_tau=Y} is actually $F(r,Y_r,Z_r)$.\\
\emph{Step 2. Strong convergence of $(Z^n)_n$.} Let $\alpha\in(0,1)$ and consider the function $h_\alpha(y) = |\min(y-\alpha,0)|^2$, $y\in\R$. We apply Meyer-It\^o's formula combined with the occupation times formula (see, e.g., Theorem 70 and Corollary 1, Chapter IV, in \cite{protter05}) to $h_\alpha(Y_s^n-Y_s)$ between $t$ and $T$, observing that the second derivative of $h_\alpha$ in the sense of distributions is a $\sigma$-finite Borel measure on $\R$, absolutely continuous with respect to the Lebesgue measure with density $2\cdot1_{]-\infty,\alpha[}(\cdot)$. We obtain
\begin{align*}
&\E\big[|\min(Y_t^n-Y_t-\alpha,0)|^2\big] + \E\int_t^T 1_{\{Y_s^n-Y_s<\alpha\}} |Z_s^n-Z_s|^2 ds \\
&= \ \E\big[|\min(Y_T^n-Y_T-\alpha,0)|^2\big] + 2\E\int_t^T \min(Y_s^n-Y_s-\alpha,0) \big(F_n(s,Y_s^n,Z_s^n) - G(s)\big)ds \\
&\quad \ + 2\E\int_t^T \min(Y_s^n - Y_s-\alpha,0) dK_s^n - 2\E\int_t^T \min(Y_s^n - Y_s-\alpha,0) dK_s.
\end{align*}
Since $\min(Y_s^n - Y_s-\alpha,0) dK_s^n \leq 0$, we get
\begin{align}
\label{E:min<K}
&\E\int_t^T 1_{\{Y_s^n-Y_s<\alpha\}} |Z_s^n-Z_s|^2 ds \ \leq \ \E\big[|\min(Y_T^n-Y_T-\alpha,0)|^2\big] \\
&+ 2\E\int_t^T \min(Y_s^n-Y_s-\alpha,0) \big(F_n(s,Y_s^n,Z_s^n) - G(s)\big)ds - 2\E\int_t^T \min(Y_s^n - Y_s-\alpha,0) dK_s. \notag
\end{align}
Let us study the behavior of the right-hand side of \eqref{E:min<K} as $n$ goes to infinity. We begin noting that
\begin{equation}
\label{E:ProofConvergence1}
\E\big[|\min(Y_T^n-Y_T-\alpha,0)|^2\big] \ \overset{n\rightarrow\infty}{\longrightarrow} \ \alpha^2.
\end{equation}
Regarding the second term on the right-hand side of \eqref{E:min<K}, since the sequence $(F_n(\cdot,Y_\cdot^n,Z_\cdot^n) - G(\cdot))_n$ is bounded in $\L^2(t,T;\R)$, we have
\[
\sup_{n\in\N}\bigg(\E\bigg[\int_t^T|F_n(s,Y_s^n,Z_s^n) - G(s)|^2ds\bigg]\bigg)^{\frac{1}{2}} \ =: \ \bar c \ < \ \infty.
\]
Therefore, by Cauchy-Schwarz inequality we find
\begin{align}
\label{E:ProofConvergence2}
&\E\int_t^T |\min(Y_s^n-Y_s-\alpha,0)| |F_n(s,Y_s^n,Z_s^n) - G(s)|ds \notag \\
&\leq \bar c \bigg(\E\bigg[\int_t^T|\min(Y_s^n-Y_s-\alpha,0)|^2ds\bigg]\bigg)^{\frac{1}{2}} \ \overset{n\rightarrow\infty}{\longrightarrow} \ \bar c\sqrt{T-t}\,\alpha.
\end{align}
Concerning the last term on the right-hand side of \eqref{E:min<K}, we notice that, by assumption and Remark \ref{R:Y_Sp<infty}, there exists some $p>2$ such that, from Cauchy-Schwarz inequality,
\begin{align*}
&\sup_{n\in\N}\E\bigg[\int_t^T |\min(Y_s^n - Y_s-\alpha,0)|^{\frac{p}{2}} dK_s\bigg] \\
&\leq \ \sup_{n\in\N}\bigg(\E\Big[\sup_{t \leq s \leq T} |\min(Y_s^n - Y_s-\alpha,0)|^p \Big]\bigg)^{\frac{1}{2}} \big(\E\big[|K_T|^2\big]\big)^{\frac{1}{2}} \ < \infty.
\end{align*}
It follows that $(\min(Y_\cdot^n - Y_\cdot-\alpha,0))_n$ is a uniformly integrable sequence on $([t,T]\times\Omega,\Bc([t,T])\otimes\Fc,dK_s\otimes d\P)$. Moreover, by assumption, there exists a null measurable set $N_Y\subset\Omega$ such that $Y_s^n(\omega)$ converges to $Y_s(\omega)$, for all $s\in[t,T]$ and all $\omega\in\Omega\backslash N_Y$. Notice that $dK_s\otimes d\P([t,T]\times N_Y)=0$, therefore $Y^n$ converges to $Y$ pointwise a.e. with respect to $dK_s\otimes d\P$. This implies that
\begin{equation}
\label{E:ProofConvergence3}
\E\bigg[\int_t^T |\min(Y_s^n - Y_s-\alpha,0)| dK_s\bigg] \ \overset{n\rightarrow\infty}{\longrightarrow} \ \alpha\E[K_T].
\end{equation}
By the convergence results \eqref{E:ProofConvergence1}, \eqref{E:ProofConvergence2}, and \eqref{E:ProofConvergence3}, \eqref{E:min<K} gives
\begin{equation}
\label{E:min<K2}
\limsup_{n\rightarrow\infty}\E\int_t^T 1_{\{Y_s^n-Y_s<\alpha\}} |Z_s^n-Z_s|^2 ds \ \leq \ \alpha^2 + 2\bar c\sqrt{T-t}\,\alpha + 2\alpha\E[K_T].
\end{equation}
From Egoroff's theorem, for any $\delta>0$ there exists a measurable set $A\subset[t,T]\times\Omega$, with $ds\otimes d\P(A)<\delta$, such that $(Y^n)_n$ converges uniformly to $Y$ on $([t,T]\times\Omega)\backslash A$. In particular, for any $\alpha\in]0,1[$ we have $|Y_s^n(\omega)-Y_s(\omega)| < \alpha$, for all $(s,\omega)\in([t,T]\times\Omega)\backslash A$, whenever $n$ is large enough. Therefore, from \eqref{E:min<K2} we get
\begin{align*}
&\limsup_{n\rightarrow\infty}\E\int_t^T 1_{([t,T]\times\Omega)\backslash A} |Z_s^n-Z_s|^2 ds \ = \ \limsup_{n\rightarrow\infty}\E\int_t^T 1_{([t,T]\times\Omega)\backslash A} 1_{\{Y_s^n-Y_s<\alpha\}} |Z_s^n-Z_s|^2 ds \notag \\
&\leq \ \limsup_{n\rightarrow\infty}\E\int_t^T 1_{\{Y_s^n-Y_s<\alpha\}} |Z_s^n-Z_s|^2 ds \ \leq \ \alpha^2 + 2\bar c\sqrt{T-t}\,\alpha + 2\alpha\E[K_T].
\end{align*}
Sending $\alpha\rightarrow0^+$, we obtain
\begin{equation}
\label{E:alpha-->0+}
\lim_{n\rightarrow\infty}\E\int_t^T 1_{([t,T]\times\Omega)\backslash A} |Z_s^n-Z_s|^2 ds \ = \ 0.
\end{equation}
Now, let $q\in[1,2[$; by H\"older's inequality,
\begin{align*}
&\E\int_t^T |Z_s^n-Z_s|^q ds \ = \ \E\int_t^T 1_{([t,T]\times\Omega)\backslash A} |Z_s^n-Z_s|^q ds + \E\int_t^T 1_A |Z_s^n-Z_s|^q ds \\
&\leq \ \bigg(\E\int_t^T 1_{([t,T]\times\Omega)\backslash A} |Z_s^n-Z_s|^2 ds\bigg)^{\frac{q}{2}}(T-t)^{\frac{2-q}{2}} + \bigg(\E\int_t^T |Z_s^n-Z_s|^2 ds\bigg)^{\frac{q}{2}}\delta^{\frac{2-q}{2}}.
\end{align*}
Since the sequence $(Z^n)_n$ is bounded in $\H^2(t,T)^d$, we have
\[
\sup_{n\in\N}\E\int_t^T |Z_s^n-Z_s|^2 ds \ =: \ \hat c < \infty.
\]
Therefore
\[
\E\int_t^T |Z_s^n-Z_s|^q ds \ \leq \ \bigg(\E\int_t^T 1_{([t,T]\times\Omega)\backslash A} |Z_s^n-Z_s|^2 ds\bigg)^{\frac{q}{2}}(T-t)^{\frac{2-q}{2}} + \hat c^{\frac{q}{2}}\delta^{\frac{2-q}{2}},
\]
which implies, by \eqref{E:alpha-->0+},
\[
\limsup_{n\rightarrow\infty}\E\int_t^T |Z_s^n-Z_s|^q ds \ \leq \ \hat c^{\frac{q}{2}}\delta^{\frac{2-q}{2}}.
\]
Sending $\delta\rightarrow0^+$ we deduce the strong convergence of $Z^n$ towards $Z$ in $\L^q(t,T;\R^d)$, for any $q\in[1,2[$.
Notice that, for any $q\in[1,2[$, we have (recalling the standard inequality $(x+y)^q \leq 2^{q-1}(x^q+y^q)$, for any $x,y\in\R_+$)
\begin{align*}
\E\bigg[\int_t^T |F_n(s,Y_s^n,Z_s^n) - F(s,Y_s,Z_s)|^q ds\bigg] \ &\leq \ 2^{q-1}\E\bigg[\int_t^T |F_n(s,Y_s^n,Z_s^n) - F_n(s,Y_s,Z_s)|^q ds\bigg] \\
&\;\;\; + 2^{q-1}\E\bigg[\int_t^T |F_n(s,Y_s,Z_s) - F(s,Y_s,Z_s)|^q ds\bigg].
\end{align*}
Therefore, by the uniform Lipschitz condition on $F_n$ with respect to $(y,z)$, and the convergence of $F_n$ towards $F$, we deduce the strong convergence of $(F_n(\cdot,Y_\cdot^n,Z_\cdot^n))_n$ to $F(\cdot,Y_\cdot,Z_\cdot)$ in $\L^q(t,T;\R)$, $q\in[1,2[$. Since $G(\cdot)$ is the weak limit of $(F_n(\cdot,Y_\cdot^n,Z_\cdot^n))_n$ in $\L^2(t,T;\R)$, we deduce that $G(\cdot)=F(\cdot,Y_\cdot,Z_\cdot)$. In conclusion, the triple $(Y,Z,K)$ solves the backward stochastic differential equation \eqref{E:BSDELimit}.
\ep
\setcounter{Theorem}{0}
\setcounter{equation}{0}
\subsection{An additional result in real analysis}
\begin{Lemma}
\label{L:StabilityApp}
Let $(G_{n,k})_{n,k\in\N}$, $(G_n)_{n\in\N}$, and $G$ be $\R^q$-valued continuous functions on $[0,T]\times X$, where $(X,d)$ is a separable metric space, and
\[
G_{n,k}(t,x) \ \overset{k\rightarrow\infty}{\longrightarrow} \ G_n(t,x), \qquad G_n(t,x) \ \overset{n\rightarrow\infty}{\longrightarrow} \ G(t,x), \qquad \forall\,(t,x)\in[0,T]\times X.
\]
Moreover, $G_{n,k}(t,x)\rightarrow G_n(t,x)$ as $k\rightarrow\infty$, for all $x\in X$, uniformly with respect to $t\in[0,T]$. Suppose also that the functions $G_{n,k}(t,\cdot)$, $n,k\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$. Then, there exists a subsequence $(G_{n,k_n})_{n\in\N}$ which converges pointwise to $G$ on $[0,T]\times X$.
\end{Lemma}
\textbf{Proof.}
We begin noting that, as a direct consequence of the assumptions of the lemma, the functions $G(t,\cdot)$, $G_n(t,\cdot)$, and $G_{n,k}(t,\cdot)$, for all $n,k\in\N$, are equicontinuous on compact sets, uniformly with respect to $t\in[0,T]$.
Let $D=\{x_1,x_2,\ldots,x_j,\ldots\}$ be a countable dense subset of $X$. Fix $n\in\N\backslash\{0\}$. Then, for any $j\in\N$ there exists $k_{n,j}\in\N$ such that
\[
|G_{n,k}(t,x_j) - G_n(t,x_j)| \ \leq \ \frac{1}{n}, \qquad \forall\,k\geq k_{n,j},\,\forall\,t\in[0,T].
\]
Set $k_n:=k_{n-1}\vee k_{n,1}\vee\cdots\vee k_{n,n}$, $\forall\,n\in\N$, where $k_{-1}:=0$. Then, we have
\[
|G_{n,k_n}(t,x_j) - G(t,x_j)| \ \overset{n\rightarrow\infty}{\longrightarrow} \ 0, \qquad \forall\,j\in\N,
\]
for all $t\in[0,T]$. It remains to prove that the convergence holds for all $(t,x)\in[0,T]\times X$. To this end, fix $x\in X$ and consider a subsequence $(x_{j_m})_{m\in\N}\subset D$ which converges to $x$. Then, the set $K$ defined by
\[
K \ := \ (x_{j_m})_{m\in\N}\cup\{x\}
\]
is a compact subset of $X$. Recall that the functions $G(t,\cdot)$ and $G_{n,k_n}(t,\cdot)$, for all $n\in\N$, are equicontinuous on $K$, uniformly with respect to $t\in[0,T]$. Then, for every $\eps>0$, there exists $\delta>0$ such that, for all $n\in\N$,
\[
|G_{n,k_n}(t,x_1) - G_{n,k_n}(t,x_2)| \ \leq \ \frac{\eps}{3}, \qquad |G(t,x_1) - G(t,x_2)| \ \leq \ \frac{\eps}{3},
\]
whenever $d(x_1,x_2)\leq\delta$, $x_1,x_2\in K$, for all $t\in[0,T]$. Fix $t\in[0,T]$ and $x_{j_{m_0}}\in(x_{j_m})_{m\in\N}$ such that $d(x,x_{j_{m_0}})\leq\delta$. Then, we can find $n_0\in\N$ (possibly depending on $t$) for which $|G_{n,k_n}(t,x_{j_{m_0}})-G(t,x_{j_{m_0}})|\leq\eps/3$ for any $n\geq n_0$. Therefore, given $n\geq n_0$ we obtain
\begin{align*}
|G_{n,k_n}(t,x) - G(t,x)| \ &\leq \ |G_{n,k_n}(t,x) - G_{n,k_n}(t,x_{j_{m_0}})| + |G_{n,k_n}(t,x_{j_{m_0}}) - G(t,x_{j_{m_0}})| \\
&\quad \ + |G(t,x_{j_{m_0}}) - G(t,x)| \ \leq \ \eps.
\end{align*}
This implies that $G_{n,k_n}$ converges to $G$ at $(t,x)$, and the claim follows from the arbitrariness of $(t,x)$.
\ep
\vspace{5mm}
\noindent\textbf{Acknowledgements.} Part of the paper was done during the visit
of the second named author at the Centre for Advanced Study (CAS) at the Norwegian Academy of Science and Letters in Oslo.
That author also benefited partially from the
support of the ``FMJH Program Gaspard Monge in optimization and operation
research'' (Project 2014-1607H).
\small
\bibliographystyle{plain}
\bibliography{biblio}
\end{document}
\begin{document}
\title{ Erd\H{o}s-R\'enyi laws for exponentially and polynomially mixing dynamical systems.}
\author{Nicolai Haydn and Matthew Nicol
\thanks{Department of Mathematics,
University of Southern California; Department of Mathematics, University of Houston.
E-mail: [email protected], [email protected].
MN would like to thank the NSF for support on NSF-DMS Grant 2009923. }
}
\date{\today}
\maketitle
\begin{abstract}
Erd\H{o}s-R\'enyi limit laws give the length scale of a time-window over which time-averages in Birkhoff sums have a
non-trivial almost-sure limit. We establish Erd\H{o}s-R\'enyi type limit laws for H\"older observables on dynamical systems modeled
by Young Towers with exponential and polynomial tails. This extends earlier results on Erd\H{o}s-R\'enyi limit laws
to a broad class of dynamical systems with some degree of hyperbolicity.
\end{abstract}
\section{Introduction}\label{sec:intro}
The Erd\H{o}s-R\'enyi fluctuation law gives the length scale of a time-window over which time-averages in Birkhoff sums have a non-trivial almost-sure limit.
It was first proved in the independent and
identically distributed (i.i.d.) case~\cite{Erd} in the following form:
\begin{prop}\label{ER} Let $(X_n)_{n\ge 1}$ be an i.i.d.\ sequence of non-degenerate random variables,
$\mathbb{E}[X_1]=0$, and
let $S_n=X_1+\cdots+X_n$. Assume that the moment generating function $\phi(t)= \mathbb{E}(e^{tX_1})$ exists in some open interval $U\subset \mR$ containing $t=0$.
For each $\alpha>0$, define $\psi_\alpha(t)= \phi(t) e^{-\alpha t}$. For those $\alpha$ for which $\psi_\alpha$ attains its minimum at a point $t_\alpha\in U$, let $c_{\alpha}=\alpha t_{\alpha} -\ln \phi (t_{\alpha})$. Then
\[
\lim_{n\to\infty} \max\{ (S_{j+[\ln n /c_{\alpha}]}-S_j)/[\ln n/c_{\alpha}]: 1\le j\le n-[\ln n/c_{\alpha}]\}=\alpha \qquad \mbox{almost surely.}
\]
\end{prop}
The existence of the moment generating function $\phi(t)$
for all $t \in U$ implies exponential large deviations with a rate function (in fact $c_{\alpha}=I(\alpha)$, where
$I$ is the rate function, defined later). This implies that sampling over a window length $k(n)$ of larger than logarithmic length scale (in the sense that $k(n)/\ln n \to \infty$) allows the ergodic theorem to kick in, so that
\[
\lim_{n\to\infty} \max\{ (S_{j+k(n)}-S_j)/k(n): 1\le j\le n-k(n)\}=0,
\]
while sampling over too small a window, for example $k(n)=1$, similarly gives a trivial limit:
\[
\lim_{n\to\infty} \max\{ (S_{j+k(n)}-S_j)/k(n): 1\le j\le n-k(n)\}=\|X_1\|_{\infty}.
\]
Define the function
$$
\theta (n,k(n)):= \max_{0\le j\le n-k(n)}\frac{S_{j+k(n)}-S_j}{k(n)},
$$
which may be interpreted as the maximal average gain over a time window of length
$k(n)$ up to time $n$.
In the setting of coin tosses
the Erd\H{o}s-R\'enyi law gives precise
information on the maximal average gain of a player in a fair game in the case where the length of the time window
ensures $\lim_{n\to \infty} \theta (n,k(n))$ has a non-degenerate almost sure limit.
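As an illustration of Proposition~\ref{ER}, consider fair coin tosses; the following is a standard computation, included here for orientation (it is not taken from~\cite{Erd}).

```latex
% Worked example: fair coin tosses, X_i = +1 or -1 with probability 1/2.
% Here \phi(t) = \E[e^{tX_1}] = \cosh t, so \psi_\alpha(t) = e^{-\alpha t}\cosh t
% is minimized where \tanh t_\alpha = \alpha, i.e.
%   t_\alpha = (1/2)\ln[(1+\alpha)/(1-\alpha)],  for  0 < \alpha < 1.
% Since \cosh t_\alpha = (1-\alpha^2)^{-1/2}, the constant c_\alpha equals
\[
c_\alpha \;=\; \alpha t_\alpha - \ln\cosh t_\alpha
\;=\; \frac{1+\alpha}{2}\ln(1+\alpha) + \frac{1-\alpha}{2}\ln(1-\alpha) .
\]
```

For instance, $c_{1/2}=\frac{3}{4}\ln\frac{3}{2}+\frac{1}{4}\ln\frac{1}{2}\approx 0.131$, so a maximal average gain of $\alpha=1/2$ is observed over windows of length $[\ln n/c_{1/2}]\approx 7.6\ln n$.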
In 1986, Deheuvels, Devroye and Lynch~\cite{Deh}, in the i.i.d.\ setting of Proposition~\ref{ER}, gave a precise rate of convergence: they showed that if
$k(n)=[\ln n/c_{\alpha}]$ then, $P$-a.s.,
\[
\limsup_{n\to\infty}\frac{\theta (n,k(n))-\alpha k(n)}{\ln k(n)}=\frac{1}{2t_{\alpha}}
\]
and
\[
\liminf_{n\to\infty}\frac{\theta (n,k(n))-\alpha k(n)}{\ln k(n)}=-\frac{1}{2t_{\alpha}}.
\]
In this paper we establish Erd\H{o}s-R\'enyi limit laws for H\"older observables on dynamical systems
modeled by Young Towers~\cite{LY98,LY99} with exponential and
polynomial tails. Tails refer to the measure $\mu( R>n)$ of the return time $R$ function to the base of the tower. Our exposition
is based upon~\cite[Section 2.3]{KKM} and~\cite{Melbourne_Varandas} who present a framework more general than that of the original Tower construction of Young~\cite{LY98} in that uniform
contraction of local stable manifolds is not assumed for polynomially mixing systems in dimensions greater than $1$. We will give more details on Young Towers below but here note that H\"older observables on Young Towers with exponential (polynomial) tails
have exponential (polynomial) decay of correlations, the precise rate is encoded in the return time function.
Our results extend the work
of~\cite{Nic} from the class of non-uniformly expanding maps with exponential decay of correlations to all systems modeled by
a Young Tower, including Sinai dispersing billiard maps; diffeomorphisms of H\'{e}non type; polynomially
mixing billiards as in~\cite{Chernov_Zhang2} (as long as the correlations decay at rate $n^{-\beta}$ with $\beta>1$);
smooth unimodal and multimodal maps satisfying the Collet-Eckmann conditions~\cite[Example 4.10]{KKM};
certain Viana maps~\cite[Example 4.11]{KKM}; and Lorenz-like maps. Other examples to which our results apply
are listed in~\cite{Melbourne_Varandas}.
In the setting of hyperbolic dynamical systems there are many earlier results.
Grigull~\cite{Gri} established the Erd\H{o}s-R\'enyi law for hyperbolic rational maps, Chazottes and Collet~\cite{Col} proved Erd\H{o}s-R\'enyi theorems with rates for uniformly expanding maps of the interval, while
Denker and Kabluchko~\cite{Den1} proved Erd\H{o}s-R\'enyi results for Gibbs-Markov dynamics. In~\cite{Denker},
Erd\H{o}s-R\'enyi limit laws for Lipschitz observations on a class of non-uniformly expanding dynamical systems, including logistic-like maps, were given, as well as related results on maximal averages of a time series arising from H\"older observations
on intermittent-type maps
over a time window of polynomial
length. Kifer~\cite{Kifer1,Kifer2} has established Erd\H{o}s-R\'enyi laws for non-conventional ergodic sums and in the setting of averaging or homogenization of chaotic dynamical systems. We mention also recent related work of~\cite{Chen1,Chen2} on applications of Erd\H{o}s-R\'enyi limit laws to multifractal analysis.
The main novelty of our technique is the use of the symbolic metric on the axiomatic Young Tower construction of~\cite{Melbourne_Varandas, KKM} to control the norm of the indicator function of sets of the form $\{S_n > n\alpha\}$ on the quotiented tower. This eliminates many difficulties involved with considering the Lipschitz
norm of such sets with respect to the Riemannian metric on the phase space of the system. The structure allows
us to consider, with small error, averaged Birkhoff sums as being constant on stable manifolds, and thence
use the decay of correlations for observables on the quotiented tower in terms of their Lipschitz and $L^{\infty}$ norms.
Our results in the case of Young Towers with exponential decay of correlations, Theorem~\ref{main}, are optimal and replicate the i.i.d case, while in the case of Young Towers with polynomial tails
we investigate windows of polynomial length and give close to optimal upper and lower bounds, Theorem~\ref{poly1} and Theorem~\ref{poly2}.
\section{Young Towers.}\label{sec-NUH}
We now describe more precisely what we mean by a non-uniformly hyperbolic dynamical system modeled by a Young Tower. Our exposition
is based upon~\cite[Section 2.3]{KKM} and~\cite{Melbourne_Varandas} who present a framework more general than that of the original Tower of Young~\cite{LY98} in that uniform
contraction of local stable manifolds is not assumed for polynomially mixing systems in dimensions greater than $1$. This set-up is very useful for the study of almost sure fluctuations of
Birkhoff sums of bounded variables.
We suppose $T$ is a diffeomorphism of a Riemannian manifold $(M,d)$, possibly with singularities. Fix a subset $\Lambda\subset M$ with a
`product structure'. Product structure means there exists a family of disjoint stable disks (sometimes called local stable manifolds) $\{W^s\}$ that cover $\Lambda$ as well as a family
of disjoint unstable disks (sometimes called local unstable manifolds) $\{W^u\}$ that cover $\Lambda$. The stable and unstable
disks containing $x\in \Lambda$ are denoted $W^s (x)$ and $W^u (x)$. Each stable disk intersects each unstable disk in precisely one point.
Suppose there is a partition $\{\Lambda_j\}$ of $\Lambda$ such that each stable disk $W^s (x)$ lies in $\Lambda_j$ if $x\in \Lambda_j$.
Suppose there exists a `return time' integer-valued function $R:\Lambda \to \N$, constant with value $R(j)$
on each partition element $\Lambda_j$, such that $T^{R(j)}(W^s (x)) \subset W^s (T^{R(j)} x)$ for all $x\in \Lambda_j$. We assume that
the greatest common divisor of the integers $\{ R(j)\}$ is $1$, which ensures that the Tower is mixing. We define the induced return
map $f: \Lambda \to \Lambda$ by $f(x)=T^{R(x)} (x)$.
For $x,y \in \Lambda$ let $s(x,y)$ be the least integer $n\ge 0$ such that $f^n (x)$ and $f^n (y)$ lie in different partition elements of $\Lambda$. We call
$s$ the separation time with respect to the map $f: \Lambda \to \Lambda$.
\noindent {\bf Assumptions:} there exist constants $K\ge 1$ and $0<\beta_1<1$ such that
(a) if $z \in W^s (x)$ then $d(f^n z, f^n x) \le K \beta_1^n$;
(b) if $z\in W^u (x)$ then $d(f^n z, f^n x)\le K \beta_1^{s(x,z)-n}$;
(c) if $z,x\in \Lambda$ then $d(T^j z,T^j x) \le K (d(z,x)+d(fz, fx))$ for all $0\le j \le \min \{ R(z), R(x)\}$.
Define an equivalence relation on $\Lambda$ by $z\sim x$ if $z\in W^s (x)$ and form the quotient space $\overline{\Lambda}=\Lambda/\sim$
with corresponding partition $\{ \overline{\Lambda_j}\}$. The return time function $R: \overline{\Lambda}\to \N $ is well-defined as
each stable disk $W^s (x)$ lies in $\Lambda_j$ if $x\in \Lambda_j$ and $T^{R(j)}(W^s (x)) \subset W^s (T^{R(j)} x)$ for all $x\in \Lambda_j$.
So we have a well-defined induced map $\bar{f}: \overline{\Lambda} \to \overline{\Lambda}$. Suppose that $\bar{f}$ and the partition $\{\overline{\Lambda_j}\}$
separate points in $\overline{\Lambda}$.
Define $d_{\beta_1} (z,x)=\beta_1^{s(z,x)}$, then $d_{\beta_1}$ is
a metric on $\overline{\Lambda}$.
Let $m$ be a reference probability measure on $\overline{\Lambda}$ (in most applications this will be normalized Lebesgue measure).
Assume that $\bar{f}:\overline{\Lambda}\to \overline{\Lambda}$ is a uniformly expanding Gibbs-Markov map on $(\overline{\Lambda}, d_{\beta_1})$. By this we mean that
$\bar{f}$ is a measure-theoretic bijection from each $\overline{\Lambda_j}$ onto
$\overline{\Lambda}$.
We assume that $\bar{f}: \overline{\Lambda} \to \overline{\Lambda}$ has an invariant probability measure $\overline{\nu}$ and $0<a<\frac{d\bar{\nu}}{dm} < b$ for some constants $a,b$. We
assume that $R$ is $\overline{\nu}$-integrable and that there is an $f$-invariant probability measure $\nu$ on $\Lambda$ such that $\overline{\pi}^{*} \nu=\overline{\nu}$, where
$\overline{\pi}$ is the quotient map taking $\Lambda$ onto $\Lambda/\sim$. Now we define the Young Tower
\[
\Delta =\{ (x,j)\in \Lambda \times \N: 0\le j \le R(x)-1\}
\]
and the tower map $F$ by
\[
F(x,j)= \left\{ \begin{array}{ll}
(x,j+1) & \mbox{if $j < R(x)-1$};\\
(fx,0)& \mbox{if $j=R(x)-1$}.\end{array} \right.
\]
and lift $\nu$ in a standard way to an invariant probability measure $\nu_{\Delta}$ for $F: \Delta \to \Delta$; in fact $\nu_{\Delta}$ is the product $\nu \times \mbox{counting measure}$, normalized to be a probability measure.
Define the semi-conjugacy $\pi : \Delta \to M$,
$\pi (x,j)=T^j (x)$. The measure $\mu=\pi^{*} \nu_{\Delta}$ is a $T$-invariant mixing probability measure on $M$. Given an observable $\varphi: M \to \mathbb{R}$ we may lift it to
an observable $\varphi: \Delta \to \mathbb{R}$ by defining $\varphi (x,j)=\varphi (T^j x)$ (we keep
the same notation for the observable). The semi-conjugacy $\pi$ allows us to transfer statistical properties from
lifted observables $\varphi$ on $(\Delta, F, \nu_{\Delta} )$ to the original observables $\varphi$ on $(T, M, \mu)$.
\section{Large deviations and rate functions.}
Before stating precisely our main result we recall the definition of rate function and some other notions of large deviations theory.
Suppose $(T, M,\mu)$ is a probability preserving transformation and $\varphi: M \to \mathbb R$ is a mean-zero integrable function i.e.\
$\int_M \varphi~d\mu =0$.
Throughout this paper we will write $S_n (\varphi ): =\varphi +\varphi\circ T +\ldots + \varphi \circ T^{n-1} $
for the $n$th ergodic sum of $\varphi$. Sometimes we will write $S_n$ instead
of $S_n(\varphi)$ for simplicity of notation or when $\varphi$ is clear from context.
\begin{defn} A mean-zero integrable function $\varphi: M \to \mathbb R$ is said to satisfy a large deviation principle with rate function $I(\alpha)$, if there exists a non-empty neighborhood $U$ of $0$ and a strictly convex function $I:U\to \mathbb R$, non-negative and vanishing only at $\alpha=0$, such that
\begin{eqnarray}\label{rate+}
\lim_{n\to\infty} \frac 1n\log \mu (S_n (\varphi) \ge n \alpha)& =& -I(\alpha)
\end{eqnarray}
for all $\alpha>0$ in $U$ and
\begin{eqnarray}\label{rate-}
\lim_{n\to\infty} \frac 1n\log \mu (S_n (\varphi) \le n \alpha) &=& -I(\alpha)
\end{eqnarray}
for all $\alpha<0$ in $U$.
\end{defn}
In the literature this is referred to as a first level or local (near the average) large deviations principle.
For H\"older observables on Young Towers with exponential tails (which are not $L^1$ coboundaries in the sense that
$\varphi\not = \psi\circ T-\psi$
for any $\psi\in L^1 (\mu)$) such an exponential large deviations result holds
with rate function $I_{\varphi} (\alpha)$~\cite{Nic,Reybellet_Young,Mel,Pollicott_Sharp}. A formula for the width
of $U$ is given in \cite{Reybellet_Young} following a standard approach but it is not useful
in concrete estimates.
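For orientation we recall the classical i.i.d.\ picture behind this definition; the following identity is Cram\'er's theorem and is standard, not specific to the dynamical setting. The rate function is the Legendre transform of the logarithmic moment generating function:

```latex
% Cramér's theorem for i.i.d. sums: the rate function is the Legendre
% transform of the logarithmic moment generating function.
\[
I(\alpha) \;=\; \sup_{t}\big( \alpha t - \ln \phi(t) \big),
\qquad \phi(t)=\E\big[e^{tX_1}\big].
\]
% The supremum is attained at t = t_\alpha solving \phi'(t_\alpha) = \alpha\,\phi(t_\alpha),
% which is exactly the minimization defining t_\alpha in Section 1.
```

In particular the supremum is attained at $t=t_\alpha$, recovering the identity $c_\alpha=I(\alpha)$ noted in Section~\ref{sec:intro}.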
\section{Erd\H{o}s-R\'enyi laws: background.}\label{erdoslaw}
Proposition~\ref{prop:erdos1} below follows from the proof of Erd\H{o}s and R\'enyi~\cite{Erd} (see \cite[Theorem 2.4.3]{Csorgo_Revesz}, Grigull~\cite{Gri}, Denker and Kabluchko~\cite{Den1}, or~\cite{Denker}, where this method has been used).
The Gauss bracket $[\,\cdot\,]$ denotes the integer part of a number.
Throughout the proofs of this paper we will concentrate on the case $\alpha>0$
as the case $\alpha<0$ is identical with the obvious modifications of statements.
\begin{prop}\label{prop:erdos1}
Let $(T,M,\mu)$ be an ergodic dynamical system and let $\varphi: M \to \mathbb R$ be an observable.
(a) Suppose that $\varphi$ satisfies a large deviation principle with rate function $I$ defined on the open set $U$ and assume $\mu(\varphi)=0$. Let $\alpha >0$, $\alpha \in U$, and set
$$ L_n=L_n(\alpha)=\left[\frac{\ln n}{I(\alpha)}\right]\qquad n\in\mathbb N.$$
Then the upper Erd\H{o}s-R\'enyi law holds, that is, for $\mu$-a.e.\ $x\in M$
$$
\limsup_{n\to\infty} \max_{0\le j\le n-L_n}\frac1{L_n}S_{L_n} (\varphi) \circ T^j (x)\le \alpha.
$$
\noindent (b) If for some constant $C>0$ and some integer $\kappa\ge 0$ we have, for each interval $A$,
\begin{eqnarray}\label{tau}
\mu\!\left(\bigcap_{m=0}^{n-L_n}\{S_{L_n} (\varphi) \circ T^m\in A\}\right)&\le &C [\mu (S_{L_n}\in A)]^{n/(L_n)^{\kappa}}
\end{eqnarray}
then the lower Erd\H{o}s-R\'enyi law holds as well, that is, for $\mu$-a.e.\ $x\in M$
$$
\liminf_{n\to\infty} \max_{0\le j\le n-L_n}\frac1{L_n}S_{L_n} (\varphi) \circ T^j \ge \alpha.
$$
\end{prop}
\begin{rmk}
If both Assumptions (a) and (b) of Proposition~\ref{prop:erdos1} hold then
\[
\lim_{n\to \infty} \max_{0\le m\le n-L_n} \frac{S_{L_n}\circ T^m}{L_n}=\alpha.
\]
\end{rmk}
\begin{rmk}\label{rem:sum}
The proof of the proposition shows that the upper Erd\H{o}s-R\'enyi law follows from the existence of
exponential large deviations given by a rate function, while for the lower Erd\H{o}s-R\'enyi law it suffices to show that for every $\epsilon>0$
the series $\sum_{n>0} \mu (B_n (\epsilon))$ converges, where $B_n(\epsilon)=\{\max_{0\le m\le n-L_n} S_{L_n}\circ T^m \le L_n(\alpha-\epsilon)\}$. This is usually the harder part to prove in the deterministic case.
\end{rmk}
\section{Erd\H{o}s-R\'enyi limit laws for Young Towers with exponential tails.}
We now state our main theorem in the case of exponential tails.
\begin{thm}\label{main}
Suppose $(T,M,\mu)$ is a dynamical system modeled by a Young Tower with $\nu_{\Delta} (R>j) \le C_2 \beta_2^j$ for some $\beta_2 \in (0,1)$ and
some constant $C_2>0$. Let $\varphi: M \to \mR$ be H\"older with $\int \varphi~d\mu=0$. Assume $\varphi\not = \psi\circ T-\psi$
for any $\psi\in L^1 (\mu)$. Let $I(\alpha)$ denote the non-degenerate rate function defined on an open set $U\subset \mR$ containing $0$. Define $S_n (x) =\sum_{j=0}^{n-1}\varphi (T^j x)$.
Let $\alpha >0$, $ \alpha\in U$ and define
\[
L_n=L_n(\alpha)=\left[\frac{\ln n}{I(\alpha)}\right]\qquad n\in\mathbb N.
\]
Then
$$
\lim_{n\to \infty} \max_{0\le j\le n-L_n} \frac{S_{L_n}\circ T^j (x)}{L_n}=\alpha,
$$
for $\mu$-a.e.\ $x\in M$.
\end{thm}
\section{Proof of Theorem~\ref{main}.}
We now give the proof of Theorem~\ref{main}, beginning with some preliminary lemmas. Throughout this proof we will assume that $\varphi$ is Lipschitz, as the modification for H\"older $\varphi$ is straightforward.
The next lemma is not optimal but is useful in allowing us to go from uniform contraction along stable manifolds upon returns to the base of the Young Tower (Property (P3) of~\cite{LY98}) to estimates of the contraction along stable leaves in the whole manifold.
\begin{lemma}\label{lemma_technical}
Let $\beta_1$ be as in Assumption (a) of Section~\ref{sec-NUH} and let $\beta_2$ be as in Theorem~\ref{main}.
Let $D(m)=\{ (x,j)\in \Delta: |T^k W^s (x,j) | < {(\sqrt{\beta_1})}^{k} \mbox{ for all } k\ge m \} $. Then for any $\delta >0$ there exists
$K(\delta) >0$ such that for all $m\ge K(\delta)$, $\nu_{\Delta} (D (m)^c)\le C {(\beta_2+\delta)}^{m/2} $ for some constant $C>0$.
\end{lemma}
\begin{proof}
Let $\tau_r(x,j):=\# \{ 1\le k \le r: F^k (x,j) \in \Lambda \}$, so that $\tau_r(x,j)$ denotes the number of times $k\in[1,r]$ that $F^k (x,j)$ lies in the base of the Young Tower.
Let $B_r=\{ (x,j) \in \Delta : \tau_r (x,j) \le \sqrt{r}\}$.
If $\tau_r (x,j) \le \sqrt{r}$ then there is at least one $k\in[0,r]$ such that $R(F^k (x,j)) >\sqrt{r}$ and hence
$B_r\subset \bigcup_{k=1}^r F^{-k} (R > \sqrt{r})$. Thus
$\nu_{\Delta} ( B_r )\le r \nu(R>\sqrt{r}) < C_2 r {\beta_2}^{ \sqrt{r}}$.
Suppose now that $(x,j) \in B_r^c$. Then $|T^{r}W^s (x,j)|\le 2K
\beta_1^{ \sqrt{r}}$ by (a) and (c), and moreover
$\nu_{\Delta} (\bigcup_{r\ge m} ( B_r ) )\le \sum_{r\ge m} C_2 r {\beta_2}^{ \sqrt{r}}$. Now the lemma follows from a
straightforward calculation.
\end{proof}
\begin{cor}\label{cor_technical}
Lift $\varphi: M \to \mR$ to $\varphi :\Delta \to \mR$ by defining $\varphi (x,j)=\varphi (T^j x)$.
Let $\beta_1$ be as in Assumption (a) of Section~\ref{sec-NUH}.
Suppose $p\in D(m)=\{ (x,j)\in \Delta: |T^k W^s (x,j) | < {(\sqrt{\beta_1})}^{k} \mbox{ for all } k\ge m \} $ and let $L_n=[\frac{\ln n}{I(\alpha)}]$. Then if $q\in W^s (p)$,
$|S_{L_n} \varphi \circ F^m (p) - S_{L_n} \varphi \circ F^m (q)|\le C \|\varphi \|_{Lip} L_n {\beta_1}^{m/2}$.
\end{cor}
\begin{proof}[Proof of Theorem~\ref{main}]
The main idea of the proof of Theorem~\ref{main} is to approximate functions on $\Delta$ by functions constant on stable manifolds, so that correlation decay
estimates on the quotiented tower from \cite[Corollary 2.9]{KKM} can be used.
Recall from Section~\ref{sec-NUH} the quotient space $\overline{\Lambda}=\Lambda/\sim$ (where $z\sim x$ if $z\in W^s (x)$), the corresponding partition $\{ \overline{\Lambda_j}\}$, the well-defined return time function $R: \overline{\Lambda}\to \N$, and the induced map $\bar{f}: \overline{\Lambda} \to \overline{\Lambda}$.
We similarly define the quotient space of $\Delta$, denoted $\overline{\Delta}$. The separation time for $\bar{f}: \overline{\Lambda}
\to \overline{\Lambda}$ extends to a separation time on $\overline{\Delta}$ by defining
\[ s((x,l),(y,l^{'})) =\left\{ \begin{array}{ll}
s(x,y) & \mbox{if $l=l^{'}$};\\
1 & \mbox{if $l \neq l^{'}$}.\end{array} \right. \]
We fix $\beta_1$ from Section 2.1 Assumption (a) and define the metric $d_{\beta_1}$ on $\overline{\Delta}$ by $d_{\beta_1} (p,q)=\beta_1^{s(p,q)}$. Here we write
$p=(x,l)\in \overline{\Delta}$, $q=(y, l^{'})$.
If $\phi: \Delta \to \mR$ is constant on stable manifolds we define the $\|\cdot\|_{\beta_1}$-norm by $\|\phi \|_{\beta_1}:=\|\phi\|_{\infty} + \sup_{p,q\in \Delta} \frac{|\phi (p)-\phi (q)|}{d_{\beta_1} (p,q)}$.
Functions $\phi$ and $\psi$ which are constant on stable manifolds in $\Delta$ naturally project to functions $\phi$ and $\psi$ (we use the same notation) on $\overline{\Delta}$
with the same $d_{\beta_1}$ Lipschitz constant and $L^{\infty}$ norm.
With this set-up the correlation estimate of ~\cite[Corollary 2.9]{KKM} can be stated:
\begin{prop}~\cite[Corollary 2.9]{KKM}
Suppose that $\phi,~\psi: \Delta \to \mR$ are constant on stable manifolds then for some constants $C$, $\beta_3\in (0,1)$,
\[
| \int_{\Delta} \phi (\psi\circ F^j)\, d\nu_{\Delta} - \int_{\Delta} \phi \, d\nu_{\Delta} \int_{\Delta} \psi \, d\nu_{\Delta}|\le C\|\phi\|_{\beta_1} \|\psi\|_{\infty} \beta_3^j
\]
for all $j\ge 0$ .
\end{prop}
In the case that $\varphi$ is not an $L^1$ coboundary, i.e.\ there exists no $\psi \in L^1 (m)$ such that
$\varphi =\psi \circ T -\psi$, it has been shown~\cite{Nic,Reybellet_Young} under the assumptions
of Theorem~\ref{main} that $\varphi$ has exponential large deviations with a rate function $I(\alpha)$. Thus assumption (a)
of Proposition~\ref{prop:erdos1} holds and we therefore only need to prove
$\mu (\{\max_{0\le m\le n-L_n} S_{L_n}\varphi\circ T^m \le L_n(\alpha-\epsilon)\})$ is summable
in order to get the lower bound by an application of the Borel-Cantelli lemma.
This direction is more difficult and uses
differential and dynamical information on the system.
For the reader's convenience we recall our assumptions:
\noindent {\it Assumptions:} there exist constants $K\ge 1$ and $0<\beta_1<1$ such that
(a) if $z \in W^s (x)$ then $d(f^n z, f^n x) \le K \beta_1^n$;
(b) if $z\in W^u (x)$ then $d(f^n z, f^n x)\le K \beta_1^{s(x,z)-n}$;
(c) if $z,x\in \Lambda$ then $d(T^j z,T^j x) \le K (d(z,x)+d(fz, fx))$ for all $0\le j \le \min \{ R(z), R(x)\}$.
We lift $\varphi$ from $M$ to $\Delta$ by defining $\varphi (x,j)=\varphi (T^j x)$. We will use the same notation for
$\varphi$ on $\Delta$ as we use for $\varphi$ on $M$.
To simplify notation we will sometimes write $p=(x,j)$ for a point $p\in \Delta$.
For $0<\epsilon \ll \alpha$ put
$$
A_n(\epsilon):=\{ (x,j) \in \Delta: S_{L_n}(x,j) \le L_n (\alpha-\epsilon)\},
$$
where
$$
S_n (x,j)=\sum_{k=0}^{n-1} \varphi \circ F^k (x,j)
$$
is the $n$th ergodic sum of $\varphi$.
Define
$$
B_n (\epsilon)=\bigcap_{m=0}^{n-L_n} F^{-m}A_n(\epsilon)
=\left\{ (x,j) \in \Delta: \max_{0\le m\le n-L_n}S_{L_n}\circ F^m \le L_n (\alpha -\epsilon)\right\}.
$$
The theorem follows by the Borel-Cantelli lemma once we show that
$\sum_{n=1}^{\infty} \nu_{\Delta} (B_n(\epsilon)) <\infty$.
To do this we will use a blocking argument to take advantage of decay of correlations and
intercalate by blocks of length $\kappa_n:=\ln^{\kappa}(n)$, where $\kappa$ will be specified later.
For $1\le j <r_n:= [\frac{n}{\kappa_n}]$ put
\[
E_n^j (\epsilon):=\bigcap_{m=1}^{j} F^{-m[\kappa_n]}A_n(\epsilon)
\]
which is a nested sequence of sets. Note that
$\nu_{\Delta} (B_n (\epsilon) )\le \nu_{\Delta} (E_n^{r_n} (\epsilon) )$.
We also have the recursion
\[
E_n^{j}(\epsilon)=A_n (\epsilon) \cap F^{-[\kappa_n]}E_n^{j-1}(\epsilon)
\]
$j=1,\dots ,r_n$, which implies
\[
\nu_{\Delta} (E_n^{j}(\epsilon))=\nu_{\Delta} (A_n (\epsilon) \cap F^{-[\kappa_n]}E_n^{j-1}(\epsilon) ).
\]
Recall $D(m)=\{ (x,j) \in \Delta : |T^k W^s (x,j) | <(\sqrt{\beta_1})^{k} \mbox{ for all } k\ge m \}$.
Hence given $\delta >0$ such that $\beta_2^{'}:=\beta_2+\delta<1$ by Lemma~\ref{lemma_technical}
we may estimate $\nu_{\Delta} (D(\kappa_n)^c) \le (\beta_2^{'})^{\kappa_n/2}$ for sufficiently large $n$.
Furthermore if $m\ge \kappa_n$, $p\in D(m)$ and $q\in W^s (p)$ then $|S_{L_n}\circ F^m(p)-S_{L_n}\circ F^m(q)|\le C\|\varphi\|_{\infty}L_n \beta_1^{\kappa_n/2}$
by the corollary to Lemma~\ref{lemma_technical}. We will take $\kappa$ and $n$ large enough that $C\|\varphi\|_{\infty}L_n \beta_1^{\kappa_n/2}<\frac{\epsilon}{2}$.
Accordingly for
large $n$ if $m\ge \kappa_n$, $p\in D(m)\cap F^{-m} A_n (\epsilon) $ and $q\in W^s (p)$ then $F^m q \in A_n (\frac{\epsilon}{2} )$.
\vspace{.5cm}
\noindent {\it First Approximation.}
We now approximate $1_{A_n (\epsilon)\cap D(\kappa_n)}$ by a function $g_n^{\epsilon}$ which is constant on stable manifolds by
requiring that if $p \in A_n (\epsilon) \cap D(\kappa_n)$ then $g_n^{\epsilon} (p)=1$ on $W^s(p)$ and $g_n^{\epsilon}=0$ otherwise.
Thus $ \{ g_n^{\epsilon} =1\} \subset A_n (\frac{\epsilon}{2})$
and
\[
\nu_{\Delta} (g_n^{\epsilon}=1)\le \nu_{\Delta} (A_n (\frac{\epsilon}{2})).
\]
Furthermore
\[
A_n (\epsilon) \subset \{ g_n^{\epsilon}=1\} \cup D(\kappa_n)^c
\]
hence
\[
\nu_{\Delta} (A_n (\epsilon)) \le \nu_{\Delta} (g_n^{\epsilon}=1) + \nu_{\Delta} (D(\kappa_n)^c).
\]
For $j=1,\ldots, r_n$ let
\[
G_n^j (\epsilon):=\prod_{i=1}^j g_{n}^{\epsilon} \circ F^{i[\kappa_n]}
\]
and note $\nu_{\Delta} (E_n^j (\epsilon)) \le \nu_{\Delta} (G_n^j (\epsilon)) + j \nu_{\Delta} (D(\kappa_n)^c)$.
\vspace{.5cm}
\noindent {\it Second Approximation.}
We will approximate $g_n^{\epsilon}$ (considered as a function on $\overline{\Delta}$) by
a $d_{\beta_1}$ Lipschitz function $h_n^{\epsilon}$ which extends to a function on $\Delta$ by
requiring $h_n^{\epsilon}$ to be constant on stable
manifolds.
First define
$$
h_n^{\epsilon} (\bar{p}):=\max \{0, 1-d_{\beta_1}(\bar{p},\mbox{supp} (g_n^{\epsilon} )) \beta_1^{-\sqrt{\kappa_n}}\}
$$
on $\overline{\Delta}$ and then extend so that it is constant on local stable manifolds and hence is a function on $\Delta$.
In particular $h_n^{\epsilon}$ is supported on the set of points $p$ such that $d_{\beta_1} ( p, \mbox{supp} (g_n^{\epsilon} ))\le \beta_1^{\sqrt{\kappa_n}}$ and
$\|h_n^{\epsilon}\|_{\beta_1}\le 1+\beta_1^{-\sqrt{\kappa_n }}$ by~\cite[Section 2.1]{Stein}.
By $(b)$ and $(c)$ if $z\in W^{u} (p)$ and $d_{\beta_1} (p,z)< \beta_1^{\sqrt{\kappa_n }}$ then $d(F^j p, F^j z)\le 2K \
\beta_1^{\sqrt{\kappa_n}-L_n}$ for all $j\le L_n$.
Hence if $d_{\beta_1} (z,\mbox{supp} (g_n^{\epsilon} ))\le \beta_1^{\sqrt{\kappa_n}}$ then there exists $p\in \mbox{supp} (g_n^{\epsilon} )$ such that $d(F^j p, F^j z)\le 2K \beta_1^{\sqrt{\kappa_n}-L_n}$ for all $j\le L_n$ and
hence
\[
| \sum_{j=0}^{L_n} [\varphi \circ F^j (z) - \varphi \circ F^j (p)]|\le C L_n \beta_1^{\sqrt{\kappa_n}-L_n }\le \frac{\epsilon}{2}
\]
for sufficiently large $n$. This implies that $\nu_{\Delta} (g_n^{\epsilon} )\le \nu_{\Delta} (h_n^{\epsilon}) \le \nu_{\Delta} (A_n (\frac{\epsilon}{2}))$.
As $h_n^{\epsilon}$ is Lipschitz in the $d_{\beta_1}$ metric we obtain by Proposition~6.3
\begin{eqnarray*}
\nu_{\Delta} (E_n^{j} (\epsilon) )
& \le & \int_{\Delta} (G_n^{j} (\epsilon))\,d\nu_{\Delta} +j\,\nu_{\Delta} (D(\kappa_n)^c)\\
&\le &\int_{\Delta} (g_n^{\epsilon} \cdot G_n^{j-1} \circ F^{[\kappa_n]} )\,d\nu_{\Delta} + Cj {(\beta_2^{'})} ^{\kappa_n/2} \\
&\le &\int h_n^{\epsilon}\,d\nu_{\Delta} \int G_n^{j-1} (\epsilon)\, d\nu_{\Delta}
+c_3 \beta_3^{\kappa_n} \|h_n^{\epsilon}\|_{\beta_1} \|G_n^{j-1} (\epsilon)\|_{\infty} +Cj {(\beta_2^{'})}^{\kappa_n/2} \\
&\le& \nu_{\Delta}(A_n (\frac{\epsilon}{2})) \nu_{\Delta} (G_n^{j-1}(\epsilon) )+c_3 \beta_3^{\kappa_n}\beta_1^{-\sqrt{\kappa_n}} +Cj {(\beta_2^{'})} ^{\kappa_n/2}.
\end{eqnarray*}
Iterating this estimate yields
$$
\nu_{\Delta} (E_n^{r_n} (\epsilon) )
\le \nu_{\Delta} ( A_n (\frac{\epsilon}{2}) )^{[n/\kappa_n]} +n c_3 \beta_3^{\kappa_n} \beta_1^{-\sqrt{\kappa_n}}
+n^2C{(\beta_2^{'})}^{\kappa_n/2}.
$$
The terms $n c_3 \beta_3^{\kappa_n} \beta_1^{-\sqrt{\kappa_n}}$ and $n^2C{(\beta_2^{'})}^{\kappa_n/2}$ are summable if we take
$\kappa >3$ in the definition of $\kappa_n$.
In order to verify summability of the
$\nu_{\Delta} ( A_n (\frac{\epsilon}{2} ))^{[n/\kappa_n]}$ term
we proceed as in the proof of Proposition~\ref{prop:erdos1}
using large deviations.
By the existence of a rate function we obtain, for any $\delta_1>0$ and $n$ sufficiently large,
$\nu_{\Delta}((A_n (\frac{\epsilon}{2}))^c)\ge e^{-L_n(I(\alpha-\frac{\epsilon}{2})+\delta_1)}$,
that is, $1-\nu_{\Delta} (A_n (\frac{\epsilon}{2}))\ge e^{-L_n(I(\alpha-\frac{\epsilon}{2})+\delta_1)}$.
Since $L_n=[\frac{\ln n}{I(\alpha)}]$ this gives $\nu_{\Delta} (A_n (\frac{\epsilon}{2}))\le 1-n^{-\rho}$ where
$\rho=\frac{I(\alpha-\frac{\epsilon}{2})+\delta_1}{I(\alpha)}$ is less than $1$ for $\delta_1>0$ small enough.
The principal term can be bounded by
$$
\nu_{\Delta}( A_n (\frac{\epsilon}{2}))^{[n/\kappa_n]}
\le (1-n^{-\rho})^{[n/\kappa_n]}
\le \exp\!\left(-\frac{n^{1-\rho}}{2\kappa_n}\right),
$$
using $1-x\le e^{-x}$, and since $\rho<1$ this is summable over $n$.
Hence by Borel-Cantelli we conclude that the set $\{ B_n(\epsilon) \mbox{ i.o.}\}$ has
measure zero. This concludes the proof.
\end{proof}
\section{Erd\H{o}s-R\'enyi laws for Young Towers with polynomial tails.}\label{sec:intro}
We now consider Young Towers with polynomial tails in the sense that $\nu_{\Delta} (R>n) \le Cn^{-\beta}$.
\subsection{Upper bounds.}
We first prove a general result. We suppose that $(T, M, \mu)$ is an ergodic dynamical
system and $\varphi: M \to \mathbb R$ is a bounded observable. We assume also
$$
\mu \!\left(\left|\frac{1}{n}S_n (\varphi)-\bar\varphi\right| > \epsilon\right)\le C(\epsilon) n^{-\beta}.
$$
\begin{thm}\label{poly1} Assume that $\bar\varphi=\mu(\varphi)=0$, $\varphi$ is bounded and for every $\epsilon>0$ there exists a constant $C(\epsilon)>0$ and
$\beta>1$ so that
$$
\mu \!\left(\left|\frac{1}{n}S_n (\varphi)\right| > \epsilon\right)\le C (\epsilon) n^{-\beta}.
$$
Then if $\tau>\frac{1}{\beta}$ for $\mu$ a.e. $x\in M$,
\[
\lim_{n\to\infty} \max_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m(x) = 0.
\]
\end{thm}
\begin{proof} Choose $\tau>\frac{1}{\beta}$ and put $L_n=n^{\tau}$.
Let $\epsilon>0$ and define
\[
A_n:=\{ x\in X: \max_{0\le m\le n-L_n} |S_{L_n}\circ T^m | \ge L_n \epsilon\}.
\]
Then $\mu (A_n)\le n \mu (|S_{L_n}| \ge \epsilon L_n)\le c_1 (\epsilon) n^{1-\tau\beta }=c_1 n^{-\delta}$,
for some $c_1=c_1(\epsilon) >0$, where $\delta=\tau\beta -1>0$.
Let $p>\frac{1}{\delta}$ (i.e.\ $\delta p>1$) and consider the subsequence $n=k^{p}$.
Since $\sum_k\mu(A_{k^p})\le c_1\sum_kk^{-p\delta}<\infty$, we obtain via the Borel-Cantelli lemma
that for $\mu$ a.e. $x \in X$
\[
\limsup_{k\to\infty} \max_{0\le m\le k^p-L_{k^p}} L_{k^p}^{-1}|S_{L_{k^p}}\circ T^m| \le \epsilon.
\]
To fill the gaps we use that $k^p-(k-1)^p=O(k^{p-1})$, so that
\[
\frac{S_{L_{k^p}}\circ T^m }{L_{k^p}}=\frac{S_{L_{(k-1)^p}}\circ T^m}{L_{k^p}}+\mathcal{O}\!\left(\frac{1}{k}\right)
\]
where the implied constant is uniform in $x\in X$ as $\varphi$ is bounded.
As $\lim_{k\to \infty}\frac{k^p}{(k-1)^{p}}=1$ we conclude
\[
\lim_{k\to \infty} \frac{|S_{L_{k^{p}}}|}{L_{k^{p}}}=\lim_{k\to \infty} \frac{|S_{L_{(k-1)^{p}}}|}{L_{k^p}}.
\]
Since any $n\in \mathbb N$ satisfies $(k-1)^p \le n\le k^p$ for some $k$ and $\varphi$ is bounded, it follows that
\[
\limsup_{n\to \infty} \max_{0\le m\le n-L_{n}} |S_{L_{n}}\circ T^m |/L_{n} \le \epsilon.
\]
As $\epsilon$ was arbitrary this gives the upper bound.
\end{proof}
\subsection{Lower bounds.}
Now we suppose there exists $\gamma \ge \beta$, an observable $\varphi$ and an $\alpha>0$ such that for all $n$,
$\mu \!\left(\left|\frac{1}{n}S_n (\varphi)-\bar\varphi\right| > \alpha \right)\ge C(\alpha) n^{-\gamma}$.
We show if we take a window of length $n^{\tau}$, $\tau < \frac{1}{1+\frac{\beta+1}{\beta}\gamma}$, then the time-averaged
fluctuation persists almost surely. In the case $\gamma=\beta$ this requires $\tau < \frac{1}{2+\beta}$. Comparing
Theorem~\ref{poly1} and Theorem~\ref{poly2} there is a gap $\frac{1}{1+\frac{\beta+1}{\beta}\gamma}<\tau <\frac{1}{\beta}$
for which we don't know the almost sure limit of windows of length $n^{\tau}$. In Example~\ref{example1} we show that $\tau<\frac{1}{\beta+1}$
is required to ensure that a time-averaged
fluctuation persists almost surely.
\begin{thm}\label{poly2}
Suppose that $(T,M,\mu)$ is modeled by a Young Tower and $\bar{\nu}_{\Delta} (R>n)\le C n^{-\beta }$.
Suppose that $\gamma \ge \beta$ and there exists a function $C$ which is continuous on a neighborhood of $\alpha>0$ such that
\[
\mu \!\left(\left|\frac{1}{n}S_n (\varphi)-\bar\varphi\right| > \alpha \right)\ge C(\alpha) n^{-\gamma}
\]
Then if $0<\tau< \frac{1}{1+ \gamma\frac{\beta+1}{\beta}}$ for $\mu$ a.e. $x\in M$
\[
\lim_{n\to\infty} \max_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m(x) \ge \alpha
\]
\end{thm}
\begin{proof}
Let $0<\epsilon \ll \alpha$ and put
\[
A_{n^{\tau}} (\epsilon) =\{ (x,j): \sum_{r=1}^{n^{\tau}} \varphi \circ F^r (x,j) \le n^{\tau}(\alpha -\epsilon) \}.
\]
Since $\varphi$ is Lipschitz continuous with Lipschitz constant $L$, if $y\in A_{n^\tau}(\epsilon)$
and $d(y,y')<\frac{\epsilon}{2Ln^\tau}$, then $y'\in A_{n^\tau}(\epsilon/2)$. Hence let us choose $n_1$
so that $K\beta_1^{n_1}<\frac\epsilon{2Ln^\tau}$ and define
$$
B_{n^{\tau} }(\epsilon)
=\{ (x,0)\in\Lambda: \exists\:0\le j< R(f^{n_1}x )~\mbox{ with } (f^{n_1}x,j) \in A_{n^{\tau}} (\epsilon)\}
=f^{-n_1}(\pi A_{n^\tau}(\epsilon)),
$$
where $\pi:\Delta\to\Lambda$ is the projection given by $\pi((x,j))=(x,0)$ ($j<R(x)$).
The choice of the integer $n_1$ ensures that if $(x,0) \in B_{n^{\tau} }(\epsilon)$ and $x'\in W^s (x)$ then
$(f^{n_1} x' ,0) \in\pi( A_{n^{\tau}} (\frac{\epsilon}{2}))$. This is a consequence of Assumption (a).
By assumption
\[
\nu_{\Delta} (A_{n^{\tau}} (\epsilon) )\ge C (\alpha-\epsilon) n^{-\gamma \tau}.
\]
For $\delta > \frac{\tau \gamma}{\beta}$ we have
\[
\nu_{\Delta} ( R> n^{\delta})\le Cn^{-\delta\beta}
\]
as by assumption $ \nu_{\Delta} ( R> \ell)\le C\ell^{-\beta}$.
Since $\nu_{\Delta} =\bar{\nu}\times \mbox{ (counting measure) }$ we get for $D\subset\Delta$
$$
\bar\nu(\pi(D))
\ge \frac{\nu_\Delta(D)-\nu_\Delta(R>n^\delta)}{n^\delta}.
$$
Consequently
$$
\bar{\nu} (\pi(A_{n^{\tau}} (\epsilon)))
\ge \left(C(\alpha-\epsilon) n^{-\tau\gamma}-O(n^{-\delta\beta})\!\right)n^{-\delta}
$$
and since $\delta\beta>\tau\gamma$ the first term dominates and we obtain
$$
\bar{\nu} (\pi(A_{n^\tau} (\epsilon)))
\ge c_1n^{-\tau\gamma-\delta}
$$
for some $c_1>0$ and since $f^{n_1}$ preserves $\bar{\nu}$,
$$
\bar{\nu} (B_{n^{\tau}} (\epsilon))\ge c_1n^{-\tau\gamma-\delta}.
$$
We can now define
$$
\tilde{B}_{n^\tau}(\epsilon)=\bigcup_{x\in B_{n^\tau}(\epsilon)}W^s(x)
$$
which by choice of $n_1$ implies that
$$
\tilde{B}_{n^\tau}(\epsilon)\subset B_{n^\tau}(\epsilon/2).
$$
We now approximate $1_{B_{n^{\tau}}(\epsilon)}$ by a function $h_{n^{\tau}} (\epsilon)$
which has Lipschitz constant $\beta_1^{-n^\tau}$ in the $d_{\beta_1}$ metric, that is we define
\[
h_{n^{\tau}} (\epsilon) (p) = \max \{0,\,1- d(p, B_{n^{\tau}} (\epsilon))\, \beta_1^{-n^{\tau}}\}
\]
where we write $p$ for $(p,0)$. We choose $n_1$ to be much smaller than $n^\tau$.
Then, by Assumptions (b) and (c), if $d(p, B_{n^{\tau}} (\epsilon))< \beta_1^{n^{\tau}}$ then
$d(f^{n_1}p,B_{n^\tau}(\epsilon))<K\beta_1^{n^\tau-n_1}<\frac\epsilon{2Ln^\tau}$, which
implies that the support of $h_{n^{\tau}} (\epsilon)$ is contained in $B_{n^{\tau}} (\epsilon/2)$.
Now we let $\tau_1>\tau$ but $\tau_1-\tau < 1- (\tau\gamma\frac{\beta+1}{\beta} +\tau)$ and consider
\[
G_n(\epsilon)=\bigcap_{m=0}^{[n/n^{\tau_1}]}f^{-mn^{\tau_1}}B_{n^{\tau}} (\epsilon)
\]
We will show that
\[
\sum_n \bar{\nu} ( G_n(\epsilon)) <\infty
\]
Now
\begin{eqnarray*}
\bar{\nu} (G_n(\epsilon))
&\le& \bar{\nu} \!\left(\prod_{m=0}^{n^{1-\tau_1}} h_{n^{\tau}} (\epsilon) \circ f^{mn^{\tau_1}}\!\right)\\
& \le &\bar{\nu} (h_{n^{\tau}} (\epsilon))\, \bar{\nu}(G_{n-1}(\epsilon))
+c_3\|h_{n^{\tau}} (\epsilon)\|_{\beta_1}\|G_{n-1} (\epsilon) \|_{\infty}\beta_3^{n^{\tau_1}}\\
& \le& [\bar{\nu}(h_{n^{\tau} }(\epsilon))]^{n^{1-\tau_1}}+ nC_3\beta_3^{n^{\tau_1}}\beta_1^{-n^{\tau}}.
\end{eqnarray*}
The term $nC_3\beta_3^{n^{\tau_1}}\beta_1^{-n^{\tau}}$ is summable in $n$ as $\tau_1>\tau$. The principal term is estimated by
$$
[\bar{\nu}(h_{n^{\tau} }(\epsilon))]^{n^{1-\tau_1}}
\le \left(1- C(\alpha-\frac{\epsilon}{2}) n^{-\gamma\tau-\delta}\!\right)^{n^{1-\tau_1}}
\le \exp\!\left(-C(\alpha-\epsilon/2)n^{1-\tau_1-\gamma\tau-\delta}\!\right)
$$
Since $\tau_1>\tau$ can be chosen arbitrarily close to $\tau$ and $\delta>\frac{\tau\gamma}\beta$
arbitrarily close to $\frac{\tau\gamma}{\beta}$, the exponent $1-\tau_1-\gamma\tau-\delta$ is positive for any chosen
$\tau<(1+\gamma\frac{\beta+1}\beta)^{-1}$. We obtain that the principal terms are summable, which
implies summability of $\bar\nu(G_n(\epsilon))$.
Now define
$$
E_n(\epsilon):=\{ (x,0) : \mbox{for all } j< n : \sum_{r=0}^{n^{\tau} }\varphi\circ F^{R_{n_1}(x)+j +r} (x,0)\le (\alpha-\frac{\epsilon}{2}) n^{\tau} \},
$$
where $R_\ell =\sum_{i=0}^{\ell-1}R\circ f^i$ is the $\ell$-th ergodic sum of $R$.
As $E_n (\epsilon) \subset G_n (\epsilon)$, $\bar\nu(G_{n} (\epsilon))$ summable implies that
$\sum_{n=1}^{\infty} \bar{\nu} (E_n (\epsilon))<\infty$.
By Birkhoff's ergodic theorem
\[
\lim_{n\to \infty} \frac{R_n (x,j)}{n}=\bar{R}=\frac1{\nu_{\Delta} (\Lambda)}
\]
for $\nu_{\Delta}$ a.e. $(x,j)\in \Delta$, and so the theorem follows.
\end{proof}
\begin{examp}\label{example1}
The condition $\tau < \frac{1}{1+ \gamma\frac{\beta+1}{\beta}}$ is close to optimal in that, taking $\gamma=\beta$, we require
$\tau <\frac{1}{2+\beta}$. We may construct a Young Tower and observable $\varphi$, $\int_{\Delta} \varphi \,d\nu_{\Delta}=0$ and $\alpha>0$ such that
$\nu_{\Delta} (S_{n^{\tau} } \varphi (x,j) \ge n^{\tau} \alpha) \le C n^{-\tau \beta}$, yet for all $\tau >\frac{1}{\beta+1}$,
\[
\lim_{n\to\infty} \max_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m =0
\]
We sketch the main idea of the tower and observable and make a couple of technical adjustments to ensure the tower is mixing and that the
observable is not a coboundary. The construction is based on that of~\cite{Bryc}.
The base partition consists of disjoint
intervals $\Lambda_i$ of length $i^{-\beta-2}$ and height $2i$. Above the base element $\Lambda_i$ the levels of the tower consist of
$\{ (x,j): 0\le j \le 2i-1\}$. We define $\varphi$ on the Tower by, if $x\in \Lambda_i$,
\[ \varphi (x,j) = \left\{ \begin{array}{ll}
-1 & \mbox{if $0\le j < i$};\\
1 & \mbox{if $i \le j < 2i$}.\end{array} \right. \]
Clearly $\nu_{\Delta} (\varphi)=0$.
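Indeed, for $x\in \Lambda_i$ the sum of $\varphi$ over the column above $\Lambda_i$ vanishes,
\[
\sum_{j=0}^{2i-1} \varphi (x,j) = \sum_{j=0}^{i-1} (-1) + \sum_{j=i}^{2i-1} 1 = -i+i=0\,,
\]
so that $\int_{\Delta} \varphi \,d\nu_{\Delta}=\sum_{i} \bar{\nu}(\Lambda_i) \sum_{j=0}^{2i-1}\varphi (\,\cdot\,,j)=0$.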
Let $0< \alpha< 1$. Note that $S_{n^{\tau} } \varphi (x,j) \ge n^{\tau} \alpha$ only if $(x,j) \in (R>n^{\tau})$, and hence $\nu_{\Delta} (S_{n^{\tau} } \varphi (x,j) \ge n^{\tau} \alpha)
\le \nu_{\Delta} (R>n^{\tau} )\le\sum_{2j\ge
n^{\tau}} (2j)\,j^{-2-\beta}\le C n^{-\tau \beta}$.
However if $\tau>\frac{1}{\beta+1}$ then $\sum_{n=1}^{\infty}\sum_{j\ge n^{\tau}} \bar{\nu} (\Lambda_j)\le C\sum_{n=1}^{\infty} n^{-\tau(\beta+1)}<\infty$. Hence by the Borel-Cantelli lemma
$f^n (x, 0)\in \bigcup_{j>n^{\tau} } \Lambda_j$ only finitely many times for $\bar{\nu}$ a.e. $(x,0)$. This implies that for $\bar{\nu}$ a.e. $(x,0)$ there exists an $N(x)$ such that for
all $n \ge N(x)$
$$
\mbox{for all } j< n : \sum_{r=0}^{n^{\tau}} \varphi (f^{j+r} x,0)< \alpha n^{\tau}.
$$
Hence for $\mu$ a.e. $x\in M$
\[
\lim_{n\to\infty} \max_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m < \alpha
\]
for every $\alpha>0$.
The same argument shows for $\nu_{\Delta}$ a.e. $(x,j)$
\[
\lim_{n\to\infty} \max_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m =0
\]
and
\[
\lim_{n\to\infty} \min_{0\le m\le n-n^{\tau}} n^{-\tau}S_{n^{\tau}}\circ T^m =0
\]
The heights of the levels in the tower above are all multiples of $2$.
Furthermore the observable $\varphi$ is a coboundary: if we define
\[
\psi (x, j) = \begin{cases}-j
& \mbox{ if } x\in \Lambda_{k},0\le j \le k\\
j-2k &\mbox{ if } x\in \Lambda_{k}, k < j \le 2k-1\end{cases}
\]
then it is easy to check that
\[
\varphi=\psi\circ F -\psi.
\]
We will modify the tower and the observable so that the greatest common divisor of the values of the
return time function $R$ is $1$ (to ensure the tower is mixing) and that the new observable is not a coboundary. We change $\Lambda_3$ to have height $3$. This entails that the
tower is mixing. On the levels above $\Lambda_3$ we modify $\varphi$ to $\varphi_1$
so that $\varphi_1 (x,j)=\kappa>0$, $j=0,1,2$, $x\in \Lambda_3$, where $\kappa>0$ is small, but $\varphi_1=\varphi$ elsewhere. This entails $r_1:=\nu_{\Delta} (\varphi_1)=\kappa \nu_{\Delta}(\Lambda_3) >0$. We subtract $r_1/(\nu_{\Delta}(\Lambda_2))$
from the value of $\varphi_1$ on $\Lambda_2$ to form a new observable $\varphi_2$ such that $\nu_{\Delta} (\varphi_2)=0$.
Since $F^3$ has a fixed point $p$ on $\Lambda_3$ and since $\sum_{j=0}^2 \varphi_2(x,j)\not = 0$ we conclude $\varphi_2$ is not a coboundary (by the Li\v{v}sic theorem~\cite{Nicol_Scott}). The new tower with observable
$\varphi_2$ we defined
has the properties of the former pertinent to our example.
\end{examp}
TITLE: Why does an ebonite rod get negatively charged when rubbed with fur
QUESTION [0 upvotes]: What I think is: the protons are present at the centre of the atom with electrons rotating around it, so when the rod is rubbed by fur the electrons get passed from the ebonite rod to the fur, which should leave the rod positively charged. But in reality the rod is negatively charged. Why?
REPLY [1 votes]: When an ebonite rod is rubbed with fur the rod gets negatively charged because the friction between the fur and the rod makes the ebonite rod gain electrons from the fur. This is because electrons in fur are less tightly bound than the electrons in the ebonite rod.
TITLE: Comparing the password strength of random characters to random words.
QUESTION [0 upvotes]: Passwords with any ASCII printable character and passwords containing only words in the English dictionary are attacked equally using a guessing program that cycles between random words and random characters with a set minimum and maximum for each case. For simplicity we'll say the same algorithms used to guess are also used to generate the passwords.
The pass-phrase will be some combination of three to five words chosen randomly from one-million (about the number of words in the English language).
$10^{6\cdot3} + 10^{6\cdot4} + 10^{6\cdot5}$ Combinations
After a pass-phrase is guessed a string is generated using a minimum of four characters chosen randomly from ninety-five potential letters, digits, and symbols. $95^4 + 95^5 + 95^6 + ...$
What maximum is needed for your random-character password to be more secure against the guessing program than three to five random words?
REPLY [0 votes]: Sixteen
$95^4 + 95^5 + 95^6 + ... + 95^{16}\approx4.45\cdot10^{31}$
$10^{6\cdot3} + 10^{6\cdot4} + 10^{6\cdot5}\approx10^{30}$
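These magnitudes are easy to verify programmatically. Here is a quick sanity check (my own sketch; the function names are not from the post):

```python
# Compare the two search spaces from the question.

def passphrase_space(words=10**6, min_words=3, max_words=5):
    # 3 to 5 words drawn from a ~10^6-word dictionary
    return sum(words**k for k in range(min_words, max_words + 1))

def character_space(max_len, min_len=4, alphabet=95):
    # min_len..max_len characters over the 95 printable ASCII symbols
    return sum(alphabet**n for n in range(min_len, max_len + 1))

phrases = passphrase_space()            # about 10^30
print(character_space(15) < phrases)    # True: a 15-character maximum is not enough
print(character_space(16) > phrases)    # True: a 16-character maximum suffices
```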
TITLE: Relaxing continuous differentiability in the inverse function theorem
QUESTION [1 upvotes]: Let $f$ be a continuous injective function on a ball centered at the origin of $\bf R^n$ into $\bf R^n$.
Suppose that $f$ is differentiable at the origin with non-zero Jacobian determinant.
Denote the inverse of $f$ by $g$. Must $g^\prime(0)$ exist and equal the inverse of the matrix $f^\prime(0)$?
I know this is true if $n = 1$.
REPLY [0 votes]: First note that by reducing the size of the ball and closing it, we obtain a compact set on which $f$ is continuous and injective.
It follows that $f$ is a homeomorphism on this smaller closed ball.
Thus, $f$ is a homeomorphism on the smaller open ball, so we may as well assume that $f$ is a homeomorphism on the original ball.
WLOG we can also assume that $f(0) = 0$.
Since $f^\prime(0)$ exists, we then have $f(x) = f^\prime(0) x + o(\|x\|)$.
Using the homeomorphic properties of $f$ and the continuity of linear maps, we see that $f^\prime(0)^{-1} y = g(y) + o(\|y\|)$.
It follows that $g^\prime(0) = f^\prime(0)^{-1}$.
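As a quick one-dimensional numerical illustration (not part of the proof; the function and tolerances below are my own choices): take $f(x) = 2x + x^3$, a continuous injection with $f(0)=0$ and $f'(0)=2\ne 0$; the difference quotient of the inverse $g$ at $0$ should approach $1/f'(0) = 1/2$.

```python
def f(x):
    return 2.0 * x + x**3  # continuous, injective, f(0) = 0, f'(0) = 2

def g(y):
    # invert f by bisection; f is strictly increasing on [-10, 10]
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

h = 1e-6
print(abs(g(h) / h - 0.5) < 1e-4)  # True: g'(0) = 1/f'(0) = 1/2
```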
TITLE: why when $X=qq^T \in S^n_+$,then $tr(XY) \lt 0$,it means that $Y \notin \mathbf K^*$?
QUESTION [1 upvotes]: $\mathbf K = S^n_+$ (nxn real symmetric positive semidefinite matrix). Show that the
dual of $\mathbf K$
$\mathbf K^*=\{\mathbf Y | Tr(\mathbf X \mathbf Y) \ge 0, \forall \mathbf X \ge \mathbf 0\}$
is self-dual, i.e., $\mathbf K^* = \mathbf K$.
Hint: Prove $\mathbf K^* \subseteq S^n_+$ and $S^n_+ \subseteq \mathbf K^*$.
Here is parts of its solution
$1.$Prove $\mathbf K^* \subseteq S^n_+$
let $Y \in \mathbf K^*$,we need to prove $Y \in S^n_+ $
$1-1.$ Suppose $ Y \notin S^n_+$,there exists $q \in R^n$ so that $q^TYq=Tr(Yqq^T) \le 0$
$1-2.$ Let $X=qq^T \in S^n_+$,then $tr(XY) \lt 0$,it means that $Y \notin \mathbf K^*$ (Contradiction) ,so $Y \in S^n_+$
I have three problem in here
1.Why can we prove $\mathbf K^* = \mathbf K$ by proving $\mathbf K^* \subseteq S^n_+$ and $S^n_+ \subseteq \mathbf K^*$?
2.i don't understand the logic about the $1-1$ and $1-2$,why when $ Y \notin S^n_+$,then $q^TYq \le 0$?why when $X=qq^T \in S^n_+$,then $tr(XY) \lt 0$,it means that $Y \notin \mathbf K^*$?
3.Why is $X=qq^T \in S^n_+$,then $tr(XY) \lt 0$,shouldn't it be $tr(XY) \gt 0$ ?
4.Why can $X=qq^T \in S^n_+$ prove $Y \notin \mathbf K^*$?
5.Why can $Y \notin \mathbf K^*$ prove $\mathbf K^* \in S_+^n$ ?
Can anyone explain it to me?
REPLY [1 votes]: In general, if two sets $A$ and $B$ satisfy $A \subseteq B$ and $B \subseteq A$, then $A = B$.
If this isn't clear, notice that $A \subseteq B$ means that every element in $A$ is also an element of $B$. Also, $B \subseteq A$ means that every element of $B$ is also an element of $A$. Hence, if we have both $A \subseteq B$ and $B \subseteq A$, then the sets $A$ and $B$ have the same elements, and thus, are equal.
Hence, $K^* \subseteq S^n_+$ and $S^n_+ \subseteq K^*$ means that $K^* = S^n_+$, i.e. $K^* = K$.
Using the property $\text{tr}(PQ) = \text{tr}(QP)$ for matrices of "compatible" sizes, we have $q^TYq = \text{tr}(q^TYq) = \text{tr}(qq^TY) = \text{tr}(XY)$ for $X = qq^T$.
So if $Y$ is not positive semidefinite, there is a vector $q$ with $q^TYq < 0$, and then $\text{tr}(XY) < 0$ for the positive semidefinite matrix $X = qq^T$, contradicting $Y \in K^*$.
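A concrete numerical check of the trace identity (illustrative only; the matrix and vector below are my own example):

```python
def matmul(A, B):
    # plain-Python matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

Y = [[1.0, 2.0],
     [2.0, 1.0]]      # symmetric, eigenvalues 3 and -1, so Y is not PSD
q = [[1.0],
     [-1.0]]          # a direction witnessing the negative eigenvalue
qT = [[1.0, -1.0]]

lhs = matmul(qT, matmul(Y, q))[0][0]   # q^T Y q
X = matmul(q, qT)                      # X = q q^T is PSD
rhs = trace(matmul(X, Y))              # tr(X Y)
print(lhs, rhs)  # -2.0 -2.0 : equal and negative, so Y cannot lie in K*
```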
\begin{document}
\title{Optimal Control of the Laplace-Beltrami operator on compact surfaces -- concept and numerical treatment}
\author{Michael Hinze and Morten Vierling}
\Nr{2011-1}
\date{January 2011}
\maketitle
\newpage
\pagenumbering{arabic}
{\small {\bf Abstract:}
We consider optimal control problems of elliptic PDEs on hypersurfaces $\Gamma$ in $\R^n$ for $n=2,3$. The leading part of the PDE is given by the Laplace-Beltrami operator, which is discretized by finite elements on a polyhedral approximation of $\Gamma$. The discrete optimal control problem is formulated on the approximating surface and is solved numerically with a semi-smooth Newton algorithm. We derive optimal a priori error estimates for problems including control constraints and provide numerical examples confirming our analytical findings.
} \\[2mm]
{\small {\bf Mathematics Subject Classification (2010): 58J32 , 49J20, 49M15} } \\[2mm]
{\small {\bf Keywords:} Elliptic optimal control problem, Laplace-Beltrami operator, surfaces, control
constraints, error estimates,semi-smooth Newton method.}
\section{Introduction}\label{S:Intro}
We are interested in the numerical treatment of the following linear-quadratic optimal control problem on an $n$-dimensional, sufficiently smooth hypersurface $\Gamma\subset \R^{n+1}$, $n=1,2$.
\begin{equation}\label{E:OptProbl}
\begin{split}
\min_{u\in\L2,\,y\in \Hone} J(u,y)&=\frac{1}{2}\|y-z\|_\L2^2+\frac{\alpha}{2}\|u\|_\L2^2\\
\text{subject to }\quad& u\in U_{ad} \textup{ and }\\
& \mint{\Gamma}{\nabla_\Gamma y \nabla_\Gamma \varphi+\mathbf cy\varphi}{\Gamma} =\mint{\Gamma}{u\varphi}{\Gamma}\,,\forall\varphi\in\Hone
\end{split}
\end{equation}
with $U_{ad}=\set{v\in \L2}{a\le v\le b}$, $a<b\in\R$. For simplicity we will assume $\Gamma$ to be compact and $\mathbf c=1$. In section \ref{Ciszero} we briefly investigate the case $\mathbf c=0$, in section \ref{S:Examples} we give an example on a surface with boundary.
Problem \eqref{E:OptProbl} may serve as a mathematical model for the optimal distribution of surfactants on a biomembrane $\Gamma$ with regard to achieving a prescribed desired concentration $z$ of a quantity $y$.
It follows by standard arguments that \eqref{E:OptProbl} admits a unique solution $u\in\Uad$ with unique associated state $y=y(u)\in\Htwo$.
Our numerical approach uses variational discretization applied to \eqref{E:OptProbl}, see \cite{Hinze2005} and \cite{HinzePinnauUlbrich2009}, on a discrete surface $\Gamma^h$ approximating $\Gamma$. The discretization of the state equation in \eqref{E:OptProbl} is achieved by the finite element method proposed in \cite{Dziuk1988}, where a priori error estimates for finite element approximations of the Poisson problem for the Laplace-Beltrami operator are provided. Let us mention that uniform estimates are presented in \cite{Demlow2009}, and steps towards a posteriori error control for elliptic PDEs on surfaces are taken by Demlow and Dziuk in \cite{DemlowDziuk2007}.
For alternative approaches for the discretization of the state equation by finite elements see the work of Burger \cite{Burger2008}. Finite element methods on moving surfaces are developed by Dziuk and Elliott in \cite{DziukElliott2007}.
To the best of the authors' knowledge, the present paper contains the first attempt to treat optimal control problems on surfaces.
\vspace{1ex}
We assume that $\Gamma$ is of class $C^2$ with unit normal field $\nu$. As an embedded, compact hypersurface in $\mathbb R^{n+1}$ it is orientable and hence the zero level set of a signed distance function $|d(x)| = \textup{dist}(x,\Gamma)$. We assume w.l.o.g. $\nabla d(x)=\nu(x)$ for $x\in\Gamma$.
Further, there exists a neighborhood $\mathcal N\subset\R^{n+1}$ of $\Gamma$, such that $d$ is also of class $C^2$ on $\mathcal N$ and the projection
\begin{equation}\label{E:Projection}a:\mathcal N\rightarrow\Gamma\,,\quad a(x) = x-d(x)\nabla d(x)\end{equation}
is unique, see e.g. \cite[Lemma 14.16]{GilbargTrudinger1998}. Note that $\nabla d (x)=\nu(a(x))$.
Using $a$ we can extend any function $\phi:\Gamma\rightarrow\R$ to $\mathcal N$ as $\bar \phi(x) = \phi(a(x))$. This allows us to represent the surface gradient in global exterior coordinates $\nabla_\Gamma \phi=(I-\nu\nu^T)\nabla \bar\phi$, with the euclidean projection $(I-\nu\nu^T)$ onto the tangential space of $\Gamma$.
We use the Laplace-Beltrami operator $\Delta_\Gamma = \nabla_\Gamma\cdot\nabla_\Gamma$ in its weak form i.e. $\Delta_\Gamma:H^1(\Gamma)\rightarrow {H^1}(\Gamma)^{*}$
$$
y\mapsto - \mint{\Gamma}{\nabla_\Gamma y \nabla_\Gamma (\,\cdot\,)}{\Gamma}\in {H^1}(\Gamma)^*\,.
$$
Let $S$ denote the prolongated restricted solution operator of the state equation
\begin{equation*}
S:\L2\rightarrow \L2\,,\quad u\mapsto y\qquad-\Delta_\Gamma y + \mathbf c y=u\,,
\end{equation*}
which is compact and constitutes a linear homeomorphism onto $\Htwo$, see \cite[1. Theorem]{Dziuk1988}.
By standard arguments we get the following necessary (and here also sufficient) conditions for optimality of $u\in \Uad$
\begin{equation}\label{I:NecCond}
\langle \nabla_u J(u,y(u)), v-u\rangle_\L2=\langle \alpha u+ S^*(S u-z), v-u\rangle_\L2\ge 0\,\quad\forall v\in \Uad\,,
\end{equation}
We rewrite \eqref{I:NecCond} as
\begin{equation}\label{E:NecCond1}
u = \pr{\Uad}{-\frac{1}{\alpha} S^*(S u-z)}\,,
\end{equation}
where $\textup{P}_{U_{ad}}$ denotes the $L^2$-orthogonal projection onto $\Uad$.
\section{Discretization}
We now discretize \eqref{E:OptProbl} using an approximation $\Gamma^h$ to $\Gamma$ which is globally of class $C^{0,1}$. Following Dziuk, we consider polyhedral $\Gamma^h=\bigcup_{i\in I_h}T_h^i$ consisting of triangles $T_h^i$ with corners on $\Gamma$, whose maximum diameter is denoted by $h$. With FEM error bounds in mind we assume the family of triangulations $\Gamma^h$ to be regular in the usual sense that the angles of all triangles are bounded away from zero uniformly in $h$.
We assume for $\Gamma^h$ that $a(\Gamma^h)=\Gamma$, with $a$ from \eqref{E:Projection}. For small $h>0$ the projection $a$ is also injective on $\Gamma^h$. In order to compare functions defined on $\Gamma^h$ with functions on $\Gamma$ we use $a$ to lift a function $y\in \Ltwoh$ to $\Gamma$
$$
y^l(a(x))=y(x)\quad \forall x\in \Gamma^h\,,
$$
and for $y\in\L2$ and sufficiently small $h>0$ we define the inverse lift
$$
y_l(x)=y(a(x))\quad \forall x\in \Gamma^h\,.
$$
For small mesh parameters $h$ the lift operation $\ldown: \L2\rightarrow \Ltwoh$ defines a linear homeomorphism with inverse $\lup$. Moreover, there exists $\C>0$ such that
\begin{equation}\label{E:IntBnd}
1-\C h^2\le \|\ldown\|_{\mathcal L(\L2,\Ld)}^2,\|\lup\|_{\mathcal L(\Ld,\L2)}^2\le 1+ \C h^2\,,\end{equation}
as the following lemma shows.
\begin{LemmaAndDef}\label{L:Integration}
Denote by $\frac{\textup{d}\Gamma}{\textup{d}\Gamma^h}$ the Jacobian of $a|_{\Gamma^h}:\Gamma^h\rightarrow\Gamma$, i.e. $\frac{\textup{d}\Gamma}{\textup{d}\Gamma^h}=|\textup{det}(M)|$ where $M\in\R^{n\times n}$ represents the derivative $\textup da(x):T_x\Gamma^h\rightarrow T_{a(x)}\Gamma$ with respect to arbitrary orthonormal bases of the respective tangential spaces. For small $h>0$ there holds
$$
\sup_\Gamma \left| 1-\frac{\textup{d}\Gamma}{\textup{d}\Gamma^h}\right|\le \C h^2\,.
$$
Now let $\frac{\textup{d}\Gamma^h}{\textup{d}\Gamma}$ denote $|\textup{det}(M^{-1})|$, so that by the change of variable formula
$$
\left|\mint{\Gamma^h}{v_l}{\Gamma^h}-\mint{\Gamma}{v}{\Gamma}\right|=\left|\mint{\Gamma}{v\frac{\textup{d}\Gamma^h}{\textup{d}\Gamma}-v}{\Gamma}\right|\le \C h^2\|v\|_{L^1(\Gamma)}\,.
$$
\end{LemmaAndDef}
\begin{proof} See \cite[Lemma 5.1]{DziukElliott2007}.\end{proof}
Problem \eqref{E:OptProbl} is approximated by the following sequence of optimal control problems
\begin{equation}\label{E:OptProblDiscr}
\begin{split}
\min_{u\in\Ld,\,y\in H^1(\Gamma^h)} J(u,y)&=\frac{1}{2}\|y-z_l\|_\Ld^2+\frac{\alpha}{2}\|u\|_{L^2(\Gamma^h)}^2\\
\text{subject to }\quad&u\in U_{ad} ^h\textup{ and }\\
&y= S_h u\,,
\end{split}
\end{equation}
with $U_{ad}^h=\set{v\in \Ld}{a\le v\le b}$, i.e. the mesh parameter $h$ enters into $\Uad$ only through $\Gamma^h$. Problem \eqref{E:OptProblDiscr} may be regarded as the extension of variational discretization introduced in \cite{Hinze2005} to optimal control problems on surfaces.
In \cite{Dziuk1988} it is explained how to implement a discrete solution operator $S_h:\Ld\rightarrow \Ld$, such that
\begin{equation}\label{E:FEconvergence}
\|\lup S_h\ldown- S\|_{\mathcal L(\L2,\L2)}\le \CF h^2\,,
\end{equation}
which we will use throughout this paper. See in particular \cite[Equation (6)]{Dziuk1988} and \cite[7. Lemma]{Dziuk1988}.
For the convenience of the reader we briefly sketch the method. Consider the space $$V_h=\set{\varphi\in C^0\left(\Gamma^h\right)}{\forall i\in I_h:\:\varphi|_{T_h^i}\in\mathcal P^1(T_h^i)}\subset H^1(\Gamma^h)$$ of piecewise linear, globally continuous functions on $\Gamma^h$. For some $u\in\L2$, to compute $y_h^l = \lup S_h\ldown u$ solve
\begin{equation*}
\mint{\Gamma^h}{\nabla_{\Gamma^h}y_h\nabla_{\Gamma^h}\varphi+\mathbf c\, y_h\varphi}{\Gamma^h}=\mint{\Gamma^h}{u_l\varphi}{\Gamma^h}\,,\quad \forall \varphi\in V_h
\end{equation*}
for $y_h\in V_h$.
We choose $\Ld$ as control space, because in general we cannot evaluate $\mint{\Gamma}{v}{\Gamma}$ exactly, whereas the expression $\mint{\Gamma^h}{v_l}{\Gamma^h}$ for piecewise polynomials $v_l$ can be computed up to machine accuracy. Also, the operator $S_h$ is self-adjoint, while $(\lup S_h\ldown)^*=\ldown^*S_h\lup^*$ is not. The adjoint operators of $\ldown$ and $\lup$ take the form
\begin{equation}\label{E:AdjLift}
\forall v\in\Ld:\:(\ldown)^*v = \frac{\textup{d}\Gamma^h}{\textup{d}\Gamma}v^l\,,\quad\forall v\in\L2:\:(\lup)^*v = \frac{\textup{d}\Gamma}{\textup{d}\Gamma^h}v_l\,,
\end{equation}
hence evaluating $\ldown^*$ and $\lup^*$ requires knowledge of the Jacobians $\frac{\textup{d}\Gamma^h}{\textup{d}\Gamma}$ and $\frac{\textup{d}\Gamma}{\textup{d}\Gamma^h}$ which may not be known analytically.
Similar to \eqref{E:OptProbl}, problem \eqref{E:OptProblDiscr} possesses a unique solution $u_h\in\Uadh$ which satisfies
\begin{equation}\label{E:NecCond2_Discr}
u_h = \pr{U_{ad}^h}{-\frac{1}{\alpha}p_h(u_h)}\,.
\end{equation}
Here $ P_{U_{ad}^h}:\Ld\rightarrow U_{ad}^h$ is the $\Ld$-orthogonal projection onto $\Uadh$ and for $v\in \Ld$ the adjoint state is $p_h(v)= S_h^*(S_h v-z_l)\in H^1(\Gamma^h)$.
\vspace{1ex}
Observe that the projections $\textup{P}_\Uad$ and $\textup{P}_\Uadh$ coincide with the point-wise projection $\textup{P}_{[a,b]}$ on $\Gamma$ and $\Gamma^h$, respectively, and hence
\begin{equation}\label{E:ProjRel}
\left(\pr{\Uadh}{v_l}\right)^l=\pr{\Uad}{v}
\end{equation}
for any $v\in \L2$.
Let us now investigate the relation between the optimal control problems \eqref{E:OptProbl} and \eqref{E:OptProblDiscr}.
\begin{Theorem}[Order of Convergence]\label{T:Convergence}
Let $u\in\L2$, $u_h\in\Ld$ be the solutions of \eqref{E:OptProbl} and \eqref{E:OptProblDiscr}, respectively. Then for sufficiently small $ h>0$ there holds
\begin{equation}\label{E:ErrorEstimate}\begin{split}
\alpha \big\|u^l_h-u\big\|_\L2^2+\big\|y^l_h-y\big\|_\L2^2\le\frac{1+\C h^2}{1-\C h^2}\bigg(\frac{1}{\alpha}\left\|\left(\lup S_h^*\ldown- S^*\right)(y-z)\right\|_\L2^2&\dots\\
+\left\|\left(\lup S_h\ldown-S\right)u\right\|_\L2^2\bigg)&\,,
\end{split}\end{equation}
with $y=Su$ and $y_h=S_hu_h$.
\end{Theorem}
\begin{proof}
From \eqref{E:ProjRel} it follows that the projection of $-\left(\frac{1}{\alpha}p(u)\right)_l$ onto $U_{ad}^h$ is $u_l$
$$
u_l = \pr{U_{ad}^h}{-\frac{1}{\alpha}p(u)_l}\,,
$$
which we insert into the necessary condition of \eqref{E:OptProblDiscr}. This gives
$$
\langle \alpha u_h + p_h( u_h),u_l-u_h\rangle_\Ld\ge 0\,.
$$
On the other hand $u_l$ is the $\Ld$-orthogonal projection of $-\frac{1}{\alpha}p(u)_l$, thus
$$
\langle-\frac{1}{\alpha}p(u)_l-u_l,u_h-u_l\rangle_\Ld\le 0\,.
$$
Adding these inequalities yields
\begin{equation*}
\begin{split}
\alpha\|u_l- u_h\|^2_\Ld\le&\langle\left(p_h( u_h)-p(u)_l\right),u_l- u_h\rangle_\Ld\\
=&\langle p_h(u_h) -S_h^*(y-z)_l,u_l-u_h\rangle_\Ld + \langle S_h^*(y-z)_l-p(u)_l,u_l-u_h\rangle_\Ld\,.
\end{split}
\end{equation*}
The first addend is estimated via
\[\begin{split}
\langle p_h(u_h) -S_h^*(y-z)_l,u_l-u_h\rangle_\Ld &= \langle y_h-y_l,S_hu_l-y_h\rangle_\Ld\\
&=-\|y_h-y_l\|^2_\Ld +\langle y_h-y_l,S_hu_l-y_l\rangle_\Ld\\
&\le - \frac{1}{2}\|y_h-y_l\|^2_\Ld+ \frac{1}{2}\|S_hu_l-y_l\|^2_\Ld\,.
\end{split}\]
The second addend satisfies
\[
\langle S_h^*(y-z)_l-p(u)_l,u_l-u_h\rangle_\Ld\le \frac{\alpha}{2}\|u_l-u_h\|^2_\Ld+ \frac{1}{2\alpha}\|S_h^*(y-z)_l-p(u)_l\|_\Ld^2\,.
\]
Together this yields
\[
\alpha\|u_l- u_h\|^2_\Ld + \|y_h-y_l\|^2_\Ld\le \frac{1}{\alpha}\|S_h^*(y-z)_l-p(u)_l\|_\Ld^2 + \|S_hu_l-y_l\|^2_\Ld\,.
\]
The claim follows using \eqref{E:IntBnd} for sufficiently small $h>0$.
\end{proof}
Because both $S$ and $S_h$ are self-adjoint, quadratic convergence follows directly from \eqref{E:ErrorEstimate}. For operators that are not self-adjoint one can use
\begin{equation}\label{E:FEadjointconvergence}
\|\ldown^* S_h^*\lup^*- S^*\|_{\mathcal L(\L2,\L2)}\le \CF h^2\,,
\end{equation}
which is a consequence of \eqref{E:FEconvergence}.
Equation \eqref{E:AdjLift} and Lemma \ref{L:Integration} imply
\begin{equation}\label{E:LiftAdjConvergence}
\|(\ldown)^*-\lup\|_{\mathcal L(\Ld,\L2)}\le\C h^2\,,\quad \|(\lup)^*-\ldown\|_{\mathcal L(\L2,\Ld)}\le\C h^2\,.
\end{equation}
Combine \eqref{E:ErrorEstimate} with \eqref{E:FEadjointconvergence} and \eqref{E:LiftAdjConvergence} to prove quadratic convergence for arbitrary linear elliptic state equations.
\section{Implementation}\label{S:Implementation}
In order to solve \eqref{E:NecCond2_Discr} numerically, we proceed as in \cite{Hinze2005} using the finite element techniques for PDEs on surfaces developed in \cite{Dziuk1988} combined with the semi-smooth Newton techniques from \cite{HintermuellerItoKunisch2003} and \cite{Ulbrich2003} applied to the equation
\begin{equation}\label{E:NecCondAlg}
G_h(u_h)=u_h - \pr{[a,b]}{-\frac{1}{\alpha} p_h(u_h)}=0\,.
\end{equation}
Since the operator $p_h$ continuously maps $v\in\Ld$ into $H^1(\Gamma^h)$, Equation \eqref{E:NecCondAlg} is semismooth and thus is amenable to a semismooth Newton method.
The generalized derivative of $G_h$ is given by
\begin{equation*}
DG_h(u)=\left(I + \frac{\indi{u,m}}{\alpha} S_h^*S_h
\right)\,,
\end{equation*}
where $\indi{}:\Gamma^h\rightarrow\{0,1\}$ denotes the indicator function of the inactive set $\mathcal I(-\frac{1}{\alpha}p_h(u))=\set{\gamma\in\Gamma^h}{a<-\frac{1}{\alpha}p_h(u)[\gamma]<b}$
$$
\indi{u,m}=\left\{\begin{array}{l} 1\textup{ on } \mathcal I(-\frac{1}{\alpha}p_h(u))\subset\Gamma^h\\
0 \textup{ elsewhere on }\Gamma^h\end{array}\right.\,,
$$
which we use both as a function and as the operator $\indi{u,m}:\Ld\rightarrow\Ld$ defined as the point-wise multiplication with the function $\indi{u,m}$.
One step of the semi-smooth Newton method for \eqref{E:NecCondAlg} then reads
\[
\left( I + \frac{\indi{u,m}}{\alpha}S_h^*S_h \right) u^+ =-G_h(u)+DG_h(u) u =\pr{[a,b]}{-\frac{1}{\alpha} p_h(u)}+ \frac{\indi{u,m}}{\alpha} S_h^*S_hu\,.
\]
Given $u$ the next iterate $u^+$ is computed by performing three steps
\begin{enumerate}
\item Set $(\left(1-\indi{u}\right)u^+)[\gamma]=\left((1-\indi{u}) \pr{[a,b]}{-\frac{1}{\alpha}p_h(u)+m}\right)[\gamma]$, which is either $a$ or $b$, depending on $\gamma\in\Gamma^h$.
\item Solve
\begin{equation*}
\left(I + \frac{\indi{u}}{\alpha} S_h^*S_h \right)\indi{u}u^+={\frac{\indi{u}}{\alpha}\Big(S_h^*z_l-S_h^*S_h\left(1-\indi{u}\right)u^+\Big)}
\end{equation*}for $\indi{u}u^+$ by CG iteration over $L^2(\mathcal I(-\frac{1}{\alpha}p_h(u)))$.
\item Set $u^+=\indi{u}u^++(1-\indi{u})u^+\,.$
\end{enumerate}
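For illustration only, the iteration above can be sketched numerically. The following is a minimal sketch, not the paper's implementation: a small dense matrix \texttt{S} stands in for the discrete solution operator $S_h$ (so $S_h^*$ becomes the transpose), the CG solve on the inactive set is replaced by a direct solve of the full Newton system, and $\alpha$ is deliberately taken large relative to $\|S^TS\|$ so that the undamped iteration converges robustly; all names and data are illustrative assumptions.

```python
import numpy as np

def semismooth_newton(S, z, alpha, a, b, tol=1e-10, maxit=100):
    """Solve u = P_[a,b](-(1/alpha) * S^T (S u - z)) by a semi-smooth Newton method."""
    n = S.shape[1]
    u = np.zeros(n)
    for it in range(maxit):
        q = -(S.T @ (S @ u - z)) / alpha          # -(1/alpha) * adjoint state p(u)
        G = u - np.clip(q, a, b)                  # residual G_h(u)
        if np.linalg.norm(G) < tol:
            break
        chi = ((q > a) & (q < b)).astype(float)   # indicator of the inactive set
        DG = np.eye(n) + (chi[:, None] / alpha) * (S.T @ S)   # generalized derivative
        u = u - np.linalg.solve(DG, G)            # full Newton step (direct solve)
    return u, it

rng = np.random.default_rng(0)
S = rng.standard_normal((8, 5))
z = 10.0 * rng.standard_normal(8)
alpha = 5.0 * np.linalg.norm(S.T @ S, 2)          # large alpha: robust convergence
u, its = semismooth_newton(S, z, alpha, a=-0.05, b=0.05)
```

The returned control satisfies the projection fixed-point equation and the box constraints up to solver tolerance.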
Details can be found in \cite{HinzeVierling2010}.
\section{The case $\mathbf c=0$}\label{Ciszero}
In this section we investigate the case $\mathbf c=0$ which corresponds to a stationary, purely diffusion driven process. Since $\Gamma$ has no boundary, in this case total mass must be conserved, i.e. the state equation admits a solution only for controls with mean value zero. For such a control the state is uniquely determined up to a constant.
Thus the admissible set $\Uad$ has to be changed to
$$U_{ad}=\set{v\in \L2}{a\le v\le b}\cap\Lz\,,\textup{ where }\Lz:=\set{v\in \L2}{\mint{\Gamma}{v}{\Gamma}=0}\,,$$
and $a<0<b$. Problem \eqref{E:OptProbl} then admits a unique solution $(u,y)$ and there holds $\mint{\Gamma}{y}{\Gamma}=\mint{\Gamma}{z}{\Gamma}$. W.l.o.g.\ we assume $\mint{\Gamma}{z}{\Gamma}=0$ and therefore only need to consider states with mean value zero. The state equation now reads $y=\tilde Su$ with the solution operator $\tilde S :\Lz\rightarrow\Lz$ of the equation $-\Delta_\Gamma y=u$, $\mint{\Gamma}{y}{\Gamma}=0$.
Using the injection $\Lz\stackrel{\imath}{\rightarrow}\L2$, $\tilde S$ is prolonged to an operator $S:\L2\rightarrow\L2$ by $S=\imath \tilde S\imath^*$. The adjoint $\imath^*:\L2\rightarrow \Lz$ of $\imath$ is the $L^2$-orthogonal projection onto $\Lz$.
The unique solution of \eqref{E:OptProbl} is again characterized by \eqref{E:NecCond1}, where the orthogonal projection now takes the form
\begin{equation*}
\pr{\Uad}{v}= \Pab{v+m}\,
\end{equation*}
with $m\in\R$ chosen such that
\begin{equation*}
\mint{\Gamma}{ \Pab{v+m}}{\Gamma}=0\,.
\end{equation*}
If for $v\in\L2$ the inactive set $\mathcal I(v+m)=\set{\gamma\in\Gamma}{a<v[\gamma]+m<b}$ is non-empty, the constant $m = m(v)$ is uniquely determined by $v\in\L2$. Hence, the solution $u\in\Uad$ satisfies
\begin{equation*}
u = \pr{[a,b]}{-\frac{1}{\alpha}p(u)+m\left(-\frac{1}{\alpha}p(u)\right)}\,,
\end{equation*}
with $p(u)= S^*(S u- \imath^*z)\in\Htwo$ denoting the adjoint state and $m(-\frac{1}{\alpha}p(u))\in\R$ is implicitly given by $\mint{\Gamma}{u}{\Gamma}=0$. Note that $\imath^*\imath$ is the identity on $\Lz$.
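The mean-value shift $m(v)$ can be computed by a simple bisection, since $m\mapsto\mint{\Gamma}{\Pab{v+m}}{\Gamma}$ is continuous and nondecreasing. The following is a hedged sketch on a discrete grid, not the paper's implementation: the function \texttt{zero\_mean\_projection}, the uniform quadrature weights \texttt{w}, and the sample data are all illustrative assumptions standing in for surface quadrature.

```python
import numpy as np

def zero_mean_projection(v, a, b, w, iters=200):
    """Find m with sum(w * clip(v + m, a, b)) = 0 (assumes a < 0 < b, w > 0)."""
    f = lambda m: float(np.sum(w * np.clip(v + m, a, b)))
    lo, hi = a - v.max(), b - v.min()    # f(lo) = a < 0 and f(hi) = b > 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    m = 0.5 * (lo + hi)
    return np.clip(v + m, a, b), m

rng = np.random.default_rng(1)
v = rng.standard_normal(100)
w = np.full(100, 1.0 / 100)              # uniform quadrature weights (assumption)
u, m = zero_mean_projection(v, -1.0, 1.0, w)
```

After bisection, the clipped function has (numerically) zero weighted mean and respects the bounds, mirroring the projection formula above.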
In \eqref{E:OptProblDiscr} we now replace $\Uadh$ by $U_{ad}^h=\set{v\in \Ld}{a\le v\le b}\cap\Lzd$. Similar as in \eqref{E:NecCond2_Discr}, the unique solution $u_h$ then satisfies
\begin{equation}\label{E:NecCond2_DiscrZero}
u_h = \pr{U_{ad}^h}{-\frac{1}{\alpha}p_h(u_h)}=\pr{[a,b]}{-\frac{1}{\alpha}p_h(u_h)+m_h\left(-\frac{1}{\alpha}p_h(u_h)\right)}\,,
\end{equation}
with $p_h(v_h)= S_h^*(S_h v_h-\imath^*_hz_l)\in H^1(\Gamma^h)$ and $m_h(-\frac{1}{\alpha}p_h(u_h))\in\R$ the unique constant such that $\mint{\Gamma^h}{u_h}{\Gamma^h}=0$. Note that $m_h\left(-\frac{1}{\alpha}p_h(u_h)\right)$ is semi-smooth with respect to $u_h$ and thus Equation \eqref{E:NecCond2_DiscrZero} is amenable to a semi-smooth Newton method.
The discretization error between the problems \eqref{E:OptProblDiscr} and \eqref{E:OptProbl} now decomposes into two components, one introduced by the discretization of $U_{ad}$ through the discretization of the surface, the other by discretization of $S$.
For the first error we need to investigate the relation between $\pr{\Uadh}{u}$ and $\pr{\Uad}{u}$, which is now slightly more involved than in \eqref{E:ProjRel}.
\begin{Lemma}\label{L:mConvergence}
Let $h>0$ be sufficiently small. There exists a constant $C_m>0$ depending only on $\Gamma$, $|a|$ and $|b|$ such that for all $v\in\L2$ with $\mint{\mathcal I(v+m(v))}{}{\Gamma}>0$ there holds
$$
|m_h(v_l)-m(v)|\le \frac{C_m}{\mint{\mathcal I(v+m(v))}{}{\Gamma}}h^2\,.
$$
\end{Lemma}
\begin{proof}
For $v\in\L2$, $\epsilon>0$ choose $\delta>0$ and $h>0$ so small that the set
$$\mathcal I_{v}^\delta=\set{\gamma\in\Gamma^h}{a+\delta\le v_l(\gamma)+m(v)\le b-\delta}$$
satisfies $\mint{\mathcal I_{v}^\delta}{}{\Gamma^h}(1+\epsilon)\ge\mint{\mathcal I(v+m(v))}{}{\Gamma}$. It is then easy to show that $m_h(v_l)$ is unique. Set
$C = \C \max(|a|,|b|)\mint{\Gamma}{}{\Gamma}$.
Decreasing $h$ further if necessary ensures
$$
\frac{Ch^2}{\mint{\mathcal I_{v}^\delta}{}{\Gamma^h}}\le (1+\epsilon)\frac{C h^2}{\mint{\mathcal I(v+m(v))}{}{\Gamma}} \le \delta\,.
$$
For $x\in\R$ let
$$
M^{h}_{v}(x) = \mint{\Gamma^h}{\Pab{v_l+x}}{\Gamma^h}\,.
$$
Since $\mint{\Gamma}{\Pab{v+m(v)}}{\Gamma}=0$, Lemma \ref{L:Integration} yields
$$|M_v^h(m(v))|\le \C \|\Pab{v+m(v)}\|_{L^1(\Gamma)} h^2\le C h^2\,.$$
Let us assume w.l.o.g. $-C h^2\le M_v^h(m(v))\le 0$. Then
$$
M_v^h\left(m(v)+\frac{Ch^2}{\mint{\mathcal I_{v}^\delta}{}{\Gamma^h}}\right)\ge M_v^h\left(m(v)\right)+ Ch^2 \ge 0
$$
implies $0\le m(v)-m_h(v_l)\le \frac{Ch^2}{\mint{\mathcal I_v^\delta}{}{\Gamma^h}}\le \frac{(1+\epsilon) C}{\mint{\mathcal I(v+m(v))}{}{\Gamma}}h^2$, since $M^h_v(x)$ is continuous with respect to $x$. This proves the claim.
\end{proof}
Because $$\left(\pr{\Uadh}{v_l}\right)^l-\pr{\Uad}{v}=\pr{[a,b]}{v+m_h(v_l)}-\pr{[a,b]}{v+m(v)}\,,$$ we get the following corollary.
\begin{Corollary}\label{C:ProjError}
Let $h>0$ be sufficiently small and $C_m$ as in Lemma \ref{L:mConvergence}. For any fixed $v\in\L2$ with $\mint{\mathcal I(v+m(v))}{}{\Gamma}>0$ we have
\begin{equation*}
\left\| \left(\pr{\Uadh}{v_l}\right)^l-\pr{\Uad}{v}\right\|_\L2\le C_m\frac{\sqrt{\mint{\Gamma}{}{\Gamma}}}{\mint{\mathcal I(v+m(v))}{}{\Gamma}}h^2\,.
\end{equation*}
\end{Corollary}
Note that since for $u\in\L2$ the adjoint $p(u)$ is a continuous function on $\Gamma$, the corollary is applicable for $v=-\frac{1}{\alpha}p(u)$.
The following theorem can be proved along the lines of Theorem \ref{T:Convergence}.
\begin{Theorem}
Let $u\in\L2$, $u_h\in\Ld$ be the solutions of \eqref{E:OptProbl} and \eqref{E:OptProblDiscr}, respectively, in the case $\mathbf c=0$. Let $\tilde u_h=\left(\pr{\Uadh}{-\frac{1}{\alpha}p(u)_l}\right)^l$. Then there holds for $\epsilon>0$ and $0\le h< h_\epsilon$
\[
\begin{split}
\alpha\|u^l_h-\tilde u_h\|_\L2^2+\big\|y^l_h-y\big\|_\L2^2\le(1+\epsilon)\bigg(\frac{1}{\alpha}\left\|\left(\lup S_h^*\ldown- S^*\right)(y-z)\right\|_\L2^2&\dots\\
+\left\|\lup S_h\ldown\tilde u_h-y\right\|_\L2^2\bigg)&\,.
\end{split}
\]
\end{Theorem}
Using Corollary \ref{C:ProjError} we conclude from the theorem
\[\begin{split}
\|u^l_h-u\|_\L2\le& C\bigg(\frac{1}{\alpha}\left\|\left(\lup S_h^*\ldown- S^*\right)(y-z)\right\|_\L2 + \frac{1}{\sqrt{\alpha}}\left\|\left(\lup S_h\ldown-S\right)u\right\|_\L2\dots\\
&+\left(1+\frac{\|S\|_{\mathcal L(\L2,\L2)}}{\sqrt{\alpha}}\right) \frac{C_m\sqrt{\mint{\Gamma}{}{\Gamma}}\,h^2}{\mint{\mathcal I(-\frac{1}{\alpha}p(u)+m(-\frac{1}{\alpha}p(u)))}{}{\Gamma}}\bigg)\,,
\end{split}\]
the latter part of which is the error introduced by the discretization of $\Uad$.
Hence one has $h^2$-convergence of the optimal controls.
\section{Numerical Examples}\label{S:Examples}
The figures show some selected Newton steps $u^+$. Note that jumps of the color-coded function values are clearly visible along the border between the active and inactive sets. For all examples, Newton's method is initialized with $u_0\equiv 0$.
The meshes are generated from a macro triangulation through congruent refinement; new nodes are projected onto the surface $\Gamma$. The maximal edge length $h$ in the triangulation is not exactly halved in each refinement, but only up to an error of order $O(h^2)$. Therefore we compute our estimated order of convergence (EOC) according to
\[
EOC_i = \frac{\ln \|u_{h_{i-1}}-u_l\|_{L^2(\Gamma^{h_{i-1}})}-\ln \|u_{h_i}-u_l\|_{L^2(\Gamma^{h_{i}})}}{\ln(2)}.
\]
For different refinement levels, the tables show $L^2$-errors, the corresponding EOC and the number of Newton iterations before the desired accuracy of $10^{-6}$ is reached.
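The EOC formula amounts to comparing successive $L^2$-errors on a logarithmic scale. As a quick, hedged check (not part of the paper), the rates reported for Example \ref{Ex:Sphere_two} can be recomputed from the tabulated errors:

```python
import math

def eoc(errors):
    """Estimated order of convergence between successive uniform refinements."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(2.0)
            for i in range(1, len(errors))]

# L^2-errors reported in the table for Example 1 (Sphere I)
errs = [5.8925e-01, 1.4299e-01, 3.5120e-02, 8.7123e-03, 2.2057e-03, 5.4855e-04]
rates = eoc(errs)   # close to 2 on every level, matching the reported EOC column
```

The computed rates agree with the table's EOC column to rounding precision, consistent with the predicted $h^2$-convergence.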
It was shown in \cite{HintermuellerUlbrich2004}, under certain assumptions on the behaviour of $-\frac{1}{\alpha} p(u)$, that the undamped Newton iteration is mesh-independent. These assumptions are met by all our examples, since the surface gradient of $-\frac{1}{\alpha} p(u)$ is bounded away from zero along the border of the inactive set. Moreover, the displayed number of Newton iterations suggests mesh-independence of the semi-smooth Newton method.
\begin{Example}[Sphere I]\label{Ex:Sphere_two}
We consider the problem
\begin{equation}\label{P:SphereRegular}
\min_{u\in\L2,\,y\in \Hone} J(u,y) \text{ subject to }
-\Delta_\Gamma y + y=u - r,\quad -1\le u\le1
\end{equation}
with $\Gamma$ the unit sphere in $\R^3$ and $\alpha = 1.5\cdot10^{-6}$. We choose $z=52\alpha x_3(x_1^2-x_2^2)$ to obtain the solution
\[
\bar u = r = \min\big(1,\max\big(-1,4x_3(x_1^2-x_2^2)\big)\big)
\]
of \eqref{P:SphereRegular}.
\end{Example}
\begin{figure}
\center
\includegraphics[width =\textwidth]{alpha_3_regd_wide_high_crop.png}
\caption{Selected full steps $u^+$ computed for Example \ref{Ex:Sphere_two} on the twice refined sphere.}
\end{figure}
\begin{table}
\begin{tabular}{r|cccccc}
reg. refs. & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
$L^2$-error & 5.8925e-01 & 1.4299e-01 & 3.5120e-02 & 8.7123e-03 & 2.2057e-03 & 5.4855e-04\\
EOC & - & 2.0430 & 2.0255 & 2.0112 & 1.9818 & 2.0075\\
\# Steps & 6 & 6 & 6 & 6 & 6 & 6\\
\end{tabular}
\caption{$L^2$-error, EOC and number of iterations for Example \ref{Ex:Sphere_two}.}
\end{table}
\begin{Example}\label{Ex:Graph}
Let $\Gamma = \set{(x_1,x_2,x_3)^T\in\R^3}{x_3=x_1x_2\land x_1,x_2\in(0,1)}$ and $\alpha =10^{-3}$. For
\begin{equation*}
\min_{u\in\L2,\,y\in \Hone} J(u,y) \text{ subject to }
-\Delta_\Gamma y =u - r,\quad y=0\text{ on }\partial\Gamma\,,\quad -0.5\le u\le0.5
\end{equation*}
we get
\[
\bar u = r = \max\big(-0.5,\min\big(0.5,\sin(\pi x_1)\sin(\pi x_2)\big)\big)
\]
by proper choice of $z$ (via symbolic differentiation).
\end{Example}
\begin{figure}
\center
\includegraphics[width =\textwidth]{alpha_1_5_6_high_crop.png}
\caption{Selected full steps $u^+$ computed for Example \ref{Ex:Graph} on the twice refined grid.}
\end{figure}
\begin{table}
\begin{tabular}{r|cccccc}
reg. refs. & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
$L^2$-error & 3.5319e-01 & 6.6120e-02 & 1.5904e-02 & 3.6357e-03 &8.8597e-04 & 2.1769e-04\\
EOC & - & 2.4173 & 2.0557 & 2.1291 & 2.0369 & 2.0250\\
\# Steps & 11 & 12 & 12 & 11 & 13 & 12\\
\end{tabular}
\caption{$L^2$-error, EOC and number of iterations for Example \ref{Ex:Graph}.}
\end{table}
Example \ref{Ex:Graph}, although $\mathbf c=0$, is also covered by the theory in Sections \ref{S:Intro}-\ref{S:Implementation}, as by the Dirichlet boundary conditions the state equation remains uniquely solvable for $u\in\L2$.
In the last two examples we apply the variational discretization to optimization problems, that involve zero-mean-value constraints as in Section \ref{Ciszero}.
\begin{figure}
\center
\includegraphics[width =\textwidth]{alpha_3_crop.png}
\caption{Selected full steps $u^+$ computed for Example \ref{Ex:Sphere} on the once refined sphere.}
\end{figure}
\begin{table}
\begin{tabular}{r|cccccc}
reg. refs. &0 & 1 & 2 & 3 & 4 & 5\\
\hline
$L^2$-error &6.7223e-01 & 1.6646e-01 & 4.3348e-02 & 1.1083e-02 & 2.7879e-03 & 6.9832e-04\\
EOC & - & 2.0138 & 1.9412 & 1.9677 & 1.9911 & 1.9972 \\
\# Steps & 8 & 8 & 7 & 7 & 6 & 6\\
\end{tabular}
\caption{$L^2$-error, EOC and number of iterations for Example \ref{Ex:Sphere}.}
\end{table}
\begin{figure}
\center
\includegraphics[width =\textwidth]{alpha_3_high_crop.png}
\caption{Selected full steps $u^+$ computed for Example \ref{Ex:Torus} on the once refined torus.}
\end{figure}
\begin{table}
\begin{tabular}{r|cccccc}
reg. refs. &0 & 1 & 2 & 3 & 4 & 5\\
\hline
$L^2$-error & 3.4603e-01 & 9.8016e-02 & 2.6178e-02 &6.6283e-03 & 1.6680e-03 & 4.1889e-04\\
EOC & - & 1.8198 & 1.9047 & 1.9816 & 1.9905 & 1.9935\\
\# Steps & 9 & 3 & 3 & 3 & 2 & 2 \\
\end{tabular}
\caption{$L^2$-error, EOC and number of iterations for Example \ref{Ex:Torus}.}
\end{table}
\begin{Example}[Sphere II]\label{Ex:Sphere}
We consider
\begin{equation*}
\begin{split}
\min_{u\in\L2,\,y\in \Hone} J(u,y)
\text{ subject to } -\Delta_\Gamma y=u\,,\quad -1\le u\le 1\,,\quad \mint{\Gamma}{y}{\Gamma} = \mint{\Gamma}{u}{\Gamma}=0\,,
\end{split}
\end{equation*}
with $\Gamma$ the unit sphere in $\R^3$. Set $\alpha=10^{-3}$ and
\[
z(x_1,x_2,x_3) = 4\alpha x_3 + \left\{\begin{array}{rlc} \ln(x_3+1)+C \,,& \textup{if } &0.5\le x_3\\
x_3-\frac{1}{4}\textup{arctanh}(x_3)\,,&\textup{if } &-0.5\le x_3\le 0.5\\
-C-\ln(1-x_3)\,, & \textup{if }& x_3\le -0.5
\end{array}\right.\,,
\]
where $C$ is chosen for $z$ to be continuous.
The solution according to these parameters is
\[
\bar u = \min\big(1,\max\big(-1, 2 x_3\big)\big)\,.
\]
\end{Example}
\begin{Example}[Torus]\label{Ex:Torus}
Let $\alpha=10^{-3}$ and $$\Gamma = \set{(x_1,x_2,x_3)^T\in\R^3}{\sqrt{x_3^2+\left(\sqrt{x_1^2+x_2^2}-1\right)^2}=\frac{1}{2}}$$ the 2-Torus embedded in $\R^3$. By symbolic differentiation we compute $z$, such that
\[
\min_{u\in\L2,\,y\in \Hone} J(u,y)
\text{ subject to }
-\Delta_\Gamma y=u-r,\quad -1\le u\le 1\,,\quad \mint{\Gamma}{y}{\Gamma}=\mint{\Gamma}{u}{\Gamma}=0
\]
is solved by
\[
\bar u = r = \max\big(-1,\min\big(1,5x_1x_2x_3\big)\big)\,.
\]
\end{Example}
As the presented tables clearly demonstrate, the examples show the expected convergence behaviour.
\section*{Acknowledgement}
The authors would like to thank Prof. Dziuk for the fruitful discussion during his stay in Hamburg in November 2010.
\bibliographystyle{annotate}
\bibliography{/Users/morten/Documents/Bibs/all}
\end{document}
TITLE: Big O notation for two series
QUESTION [0 upvotes]: I'm having trouble understanding how to analyze these two series for their Big O representations.
I can get the correct answer for this series
$1+2+3+4+...+N$ is $O(N^2)$, which I found by finding the Big O for the equation $N(N+1)/2$.
Apparently the correct answer for the following question is $O(N)$, but my result obviously differs substantially.
$1+5+25+125+...+N$ is $O(5^N)$
I figured that since the series equation is just $(5^N-1)/4$, I can just find the Big O for that equation, which would be $O(5^N)$
Where is my understanding here flawed?
REPLY [1 votes]: Note that the last element in your sum is $N$ rather than $5^N$. So, writing $N=5^k$, the sum is $1+5+\cdots+5^k$, which is $O\left(5^k\right)$ as you wrote; but $5^k=N$, so the sum is $O(N)$.
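Both closed forms can be checked numerically. A quick sketch (not part of the original answer): $1+2+\cdots+N = N(N+1)/2$ grows like $N^2$, while $1+5+\cdots+N = (5N-1)/4$ (with $N$ the last term, a power of 5) is bounded by $(5/4)N$, which is why the answer is $O(N)$.

```python
def arithmetic_sum(N):
    # 1 + 2 + ... + N
    return sum(range(1, N + 1))

def geometric_sum_to(N):
    """1 + 5 + 25 + ... + N, where N is the last (largest) term, a power of 5."""
    s, t = 0, 1
    while t <= N:
        s += t
        t *= 5
    return s

# arithmetic_sum(N) == N*(N+1)//2         -> Theta(N^2)
# geometric_sum_to(N) == (5*N - 1)//4     -> at most (5/4)*N, i.e. O(N)
```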
\section{Application}\label{extra}
We apply Proposition \ref{g} to the following examples in \cite{AGK17}.
By \cite[Ex. 1.4]{AGK17}, the blow-up $\bl_p S$ of the following $S=\PP(a,b,c)$ at the identity point $p$ is not a MDS:
\begin{align}
(a,b,c)=((m+2)^2,(m+2)^3+1,(m+2)^3 (m^2+2m-1)+m^2+3m+1),
\label{higherM}
\end{align}
where $m$ is a positive integer.
We briefly review the geometry of these $\bl_p S$.
By \cite[Thm. 1.1]{AGK17}, for every positive integer $m\geq 1$, there exists an irreducible polynomial $\xi_m\in \CC[x,y]$ such that $\xi_m$ has vanishing order $m$ at $(1,1)$ and the Newton polygon of $\xi_m$ is a triangle with vertices $(0,0), (m-1,0)$ and $(m,m+1)$.
Now the weighted projective plane $S$ above satisfies the conditions of \cite[Thm. 1.3]{AGK17}. Then by \cite[Thm. 1.3]{AGK17} and its proof, the polynomial $\xi_m$ above defines a curve $H$ in $S$, passing through $p$ with multiplicity $m$, such that the proper transform $C$ of $H$ in $\bl_p S$ is a negative curve. Then $C\neq e$.
The proof of \cite[Thm. 1.3]{AGK17} in fact shows that $H$ is the polarization given by the triangle $\Delta$ with vertices $(-\alpha,0), (m-1+\beta,0),(m,m+1)$, with
\[\alpha=\frac{1}{(m+2)^2}, \quad\beta=\frac{(m+2)^2+1}{(m+2)^3+1}.\]
Therefore on $S$ we have
\[H^2=2\mathrm{Area}(\Delta)=\frac{(m+1)^2 c}{ab}.\]
Let $B$ be the pseudo-effective divisor on $S$ generating $\Cl(S)\cong\Z$. Then $H\sim rB$ for some $r\in \Q_{>0}$. Since $B^2=1/abc$ and $H^2=r^2 B^2$, we have $r=c(m+1)$, so $[H]= c(m+1)[B]\in \Cl(S)$. Therefore $C\sim c(m+1)\pi^*B-me$.
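As a sanity check (not part of the argument), the identity $H^2=2\,\mathrm{Area}(\Delta)=(m+1)^2c/(ab)$ can be verified in exact rational arithmetic for small $m$, using the vertices $(-\alpha,0)$, $(m-1+\beta,0)$, $(m,m+1)$; the helper names below are illustrative.

```python
from fractions import Fraction as F

def weights(m):
    # the weights (a, b, c) of the weighted projective plane from (1.1)
    a = (m + 2) ** 2
    b = (m + 2) ** 3 + 1
    c = (m + 2) ** 3 * (m * m + 2 * m - 1) + m * m + 3 * m + 1
    return a, b, c

def H_squared(m):
    """2 * Area of the triangle with vertices (-alpha,0), (m-1+beta,0), (m,m+1)."""
    a, b, c = weights(m)
    alpha = F(1, a)
    beta = F(a + 1, b)          # ((m+2)^2 + 1) / ((m+2)^3 + 1)
    base = F(m - 1) + beta + alpha
    return base * (m + 1)       # 2 * (1/2 * base * height), height = m + 1
```

For instance, $m=1$ gives $(a,b,c)=(9,28,59)$ and $H^2 = 59/63 = 4\cdot 59/(9\cdot 28)$, in agreement with the formula.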
When $m\geq 2$, those $S$ above have width $w\geq 1$, so Theorem \ref{main} does not apply to $S$. Nevertheless, by Proposition \ref{g}, we have the following examples:
\begin{corollary}\label{mgeq1}
Let $X=\PP(a,b,c,d_1,d_2,\cdots,d_{n-2})$ where
\[(a,b,c)=((m+2)^2,(m+2)^3+1,(m+2)^3 (m^2+2m-1)+m^2+3m+1),\]
such that $m\in \Z_{>0}$, every $d_i$ lies in the semigroup generated by $a,b$ and $c$, and that every \linebreak $d_i< abm/(m+1)$. Let $p$ be the identity point of the open torus in $X$. Then $\bl_p X$ is not a MDS.
\end{corollary}
TITLE: How to quickly tell if a set is linearly independent or not?
QUESTION [2 upvotes]: Before proving if a set is a basis for $R^n$, I have to determine if the set is linearly independent or not.
We aren't allowed to use matrices, and I want to save time during a quiz.
What are some things I should watch out for?
REPLY [0 votes]: The two basic observations, as others have pointed out, are that if there are more vectors than the dimension of the vector space, or if the zero vector is in the set, you surely have a dependent set. Additionally, if there are identical vectors in the set, the set is dependent too (obviously). Other than this, there is no easy and quick way to check whether a set of vectors is dependent without using matrices (if you are allowed, just use row reduction to find the rank of the matrix whose column vectors are in the set, which conveniently tells you the number of independent vectors as well).
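When matrices *are* allowed, the rank test in the answer is a one-liner. A hedged sketch (the function name is illustrative): stack the vectors as columns and compare the rank to the number of vectors.

```python
import numpy as np

def is_linearly_independent(vectors, tol=1e-10):
    """Stack the vectors as columns; independent iff the rank equals the count."""
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]
```

This also catches the quick cases mentioned above: a zero vector or more vectors than the dimension both force the rank below the vector count.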
TITLE: Proof for the linear dependence of vectors
QUESTION [0 upvotes]: Let there be a vector space V and a subset A={a1... ar} ⊂ V. Let it also be the case that 1 ≤ i ≤ r.
Show that the elements of A are linearly dependent when there exists an index i∈{1,...,r} and real numbers λj, j∈{1,...,r}, j≠i, such that the following is true:
(https://i.stack.imgur.com/i55Eh.jpg)
If somebody could help me out, I would be very happy. My guess is that I have to subtract ai from the sum of the other elements to get the zero vector, but I have no idea how a formally correct answer might look.
REPLY [0 votes]: It is essentially the definition of linear dependence:
A set of vectors is said to be linearly dependent if at least one of the vectors in the set can be defined as a linear combination of the others.
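A small numeric illustration of exactly this move (the vectors are made up for the example): if $a_i = \sum_{j\ne i}\lambda_j a_j$, then moving $a_i$ to the other side gives a nontrivial combination summing to zero, which is the definition of dependence.

```python
import numpy as np

a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, -1.0])
a3 = 2.0 * a1 + 3.0 * a2          # a3 is a linear combination of a1 and a2

# Subtracting a3 (i.e. coefficient -1) yields a nontrivial vanishing combination:
coeffs = np.array([2.0, 3.0, -1.0])
combo = coeffs[0] * a1 + coeffs[1] * a2 + coeffs[2] * a3
```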
TITLE: angle of an inscribed triangle
QUESTION [1 upvotes]: I have a scalene triangle inscribed in a circle, one of its sides $a$ is $2\sqrt3$ and the length $r$ from that side to the center is $1$. I need to find the angle $x$ opposite to the side given. Here's how it looks:
How to find $x$?
REPLY [2 votes]: Draw in the radii from the center to the endpoints of $a$ to form two congruent right triangles, sharing a leg of length $r=1$ and each having another leg of length $\sqrt{3}$. From right triangle trigonometry, the angles in these right triangles at the center of the circle have measure $\arctan(\sqrt{3})=\frac{\pi}{3},$ so the measure of the entire central angle (angle at the center of the circle) that subtends the same arc as the chord $a$ is $2\arctan(\sqrt{3})=\frac{2\pi}{3},$ and the measure of the inscribed angle $x$ is half of that, $$\arctan(\sqrt{3})=\frac{\pi}{3}.$$
REPLY [1 votes]: http://www11.0zz0.com/2012/05/02/15/202577206.jpg
Using a construction involving the circle's center $M$:
draw $ML$ & $MN$
since $r$ is perpendicular to $a$
since $LM = NM$ (both are radii)
& $r$ is a common side
therefore triangle $\triangle LMO$ is congruent to triangle $\triangle NMO$.
By the Pythagorean theorem:
$r= 1$
$LO = \frac{1}{2}a$
$LM^2 = ON^2+r^2$
with $ON^2 = 3$
therefore $LM=2$
since the side opposite a $30°$ angle is half the hypotenuse, and $r=\frac{1}{2}LM$,
angle $\angle MNO = 30°$ and
angle $\angle MLO = 30°$ too!
In triangle $\triangle LMN$
angle $\angle M = 180° - ( 30°+30° ) = 120°$
Since the inscribed angle $X$ is half the central angle,
therefore angle $X = 60°$.
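The same answer drops out of a quick numeric check (just a verification, not a proof): with chord $a=2\sqrt3$ and distance $1$ from the chord to the center, the radius is $2$, the central angle is $2\pi/3$, and the inscribed angle is half of that.

```python
import math

a = 2 * math.sqrt(3)   # chord length
d = 1.0                # distance from the chord to the center

R = math.hypot(a / 2, d)                 # circumradius: sqrt(3 + 1) = 2
central = 2 * math.atan((a / 2) / d)     # central angle subtending the chord
x = central / 2                          # inscribed angle theorem
```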
\begin{document}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[subsection]
\newtheorem{theorem}[thm]{Theorem}
\newtheorem*{theorem*}{Theorem}
\newtheorem*{definition*}{Definition}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{sublemma}[thm]{Sublemma}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem*{corollary*}{Corollary}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{addendum}[thm]{Addendum}
\newtheorem{variant}[thm]{Variant}
\newtheorem{conjecture}[thm]{Conjecture}
\theoremstyle{definition}
\newtheorem{construction}[thm]{Construction}
\newtheorem{notations}[thm]{Notations}
\newtheorem{question}[thm]{Question}
\newtheorem{problem}[thm]{Problem}
\newtheorem{remark}[thm]{Remark}
\newtheorem{remarks}[thm]{Remarks}
\newtheorem{definition}[thm]{Definition}
\newtheorem{claim}[thm]{Claim}
\newtheorem{assumption}[thm]{Assumption}
\newtheorem{assumptions}[thm]{Assumptions}
\newtheorem{properties}[thm]{Properties}
\newtheorem{example}[thm]{Example}
\numberwithin{equation}{subsection}
\newcommand{\sA}{{\mathcal A}}
\newcommand{\sB}{{\mathcal B}}
\newcommand{\sC}{{\mathcal C}}
\newcommand{\sD}{{\mathcal D}}
\newcommand{\sE}{{\mathcal E}}
\newcommand{\sF}{{\mathcal F}}
\newcommand{\sG}{{\mathcal G}}
\newcommand{\sH}{{\mathcal H}}
\newcommand{\sI}{{\mathcal I}}
\newcommand{\sJ}{{\mathcal J}}
\newcommand{\sK}{{\mathcal K}}
\newcommand{\sL}{{\mathcal L}}
\newcommand{\sM}{{\mathcal M}}
\newcommand{\sN}{{\mathcal N}}
\newcommand{\sO}{{\mathcal O}}
\newcommand{\sP}{{\mathcal P}}
\newcommand{\sQ}{{\mathcal Q}}
\newcommand{\sR}{{\mathcal R}}
\newcommand{\sS}{{\mathcal S}}
\newcommand{\sT}{{\mathcal T}}
\newcommand{\sU}{{\mathcal U}}
\newcommand{\sV}{{\mathcal V}}
\newcommand{\sW}{{\mathcal W}}
\newcommand{\sX}{{\mathcal X}}
\newcommand{\sY}{{\mathcal Y}}
\newcommand{\sZ}{{\mathcal Z}}
\newcommand{\A}{{\mathbb A}}
\newcommand{\B}{{\mathbb B}}
\newcommand{\C}{{\mathbb C}}
\newcommand{\D}{{\mathbb D}}
\newcommand{\E}{{\mathbb E}}
\newcommand{\F}{{\mathbb F}}
\newcommand{\G}{{\mathbb G}}
\newcommand{\HH}{{\mathbb H}}
\newcommand{\I}{{\mathbb I}}
\newcommand{\J}{{\mathbb J}}
\renewcommand{\L}{{\mathbb L}}
\newcommand{\M}{{\mathbb M}}
\newcommand{\N}{{\mathbb N}}
\renewcommand{\P}{{\mathbb P}}
\newcommand{\Q}{{\mathbb Q}}
\newcommand{\R}{{\mathbb R}}
\newcommand{\SSS}{{\mathbb S}}
\newcommand{\T}{{\mathbb T}}
\newcommand{\U}{{\mathbb U}}
\newcommand{\V}{{\mathbb V}}
\newcommand{\W}{{\mathbb W}}
\newcommand{\X}{{\mathbb X}}
\newcommand{\Y}{{\mathbb Y}}
\newcommand{\Z}{{\mathbb Z}}
\newcommand{\id}{{\rm id}}
\newcommand{\rank}{{\rm rank}}
\newcommand{\END}{{\mathbb E}{\rm nd}}
\newcommand{\End}{{\rm End}}
\newcommand{\Hom}{{\rm Hom}}
\newcommand{\Hg}{{\rm Hg}}
\newcommand{\tr}{{\rm tr}}
\newcommand{\Sl}{{\rm Sl}}
\newcommand{\Gl}{{\rm Gl}}
\newcommand{\Cor}{{\rm Cor}}
\newcommand{\Aut}{\mathrm{Aut}}
\newcommand{\Sym}{\mathrm{Sym}}
\newcommand{\ModuliCY}{\mathfrak{M}_{CY}}
\newcommand{\HyperCY}{\mathfrak{H}_{CY}}
\newcommand{\ModuliAR}{\mathfrak{M}_{AR}}
\newcommand{\Modulione}{\mathfrak{M}_{1,n+3}}
\newcommand{\Modulin}{\mathfrak{M}_{n,n+3}}
\newcommand{\Gal}{\mathrm{Gal}}
\newcommand{\Spec}{\mathrm{Spec}}
\newcommand{\res}{\mathrm{res}}
\newcommand{\coker}{\mathrm{coker}}
\newcommand{\Jac}{\mathrm{Jac}}
\newcommand{\HIG}{\mathrm{HIG}}
\newcommand{\MIC}{\mathrm{MIC}}
\maketitle
\begin{abstract}
The $p$-adic Simpson correspondence due to Faltings \cite{Faltings 2005} is a $p$-adic analogue of non-abelian Hodge theory. The main result of this article is the following: the correspondence for line bundles can be enhanced to a rigid analytic morphism of moduli spaces under certain smallness conditions. In the complex setting, Simpson showed that there is a complex analytic morphism between the moduli space of vector bundles with integrable connection and the moduli space of representations of a finitely generated group, both regarded as algebraic varieties. We give a $p$-adic analogue of Simpson's result.
\textbf{Keywords} Arithmetic algebraic geometry, $p$-adic Hodge theory, rigid geometry, Higgs bundles.
\textbf{Mathematics Subject Classification} 14G22, 14H60, 14F35
\end{abstract}
\tableofcontents
\section{Introduction}
In the complex case, the Simpson correspondence generalizes the classical Narasimhan-Seshadri correspondence on Riemann surfaces, which it recovers in the case of vanishing Higgs field. To be more precise, let $X$ be a compact K\"ahler manifold. A Higgs bundle is a pair consisting of a holomorphic vector bundle $E$ and a holomorphic map $\theta: E \rightarrow E \otimes \Omega_{X}^{1}$ such that $\theta \wedge \theta=0$. Simpson established a one-to-one correspondence between irreducible representations of $\pi_{1}(X)$ and
stable Higgs bundles with vanishing Chern classes (see \cite{Simpson 1992} for more details).
Faltings developed a $p$-adic analogue of the Simpson correspondence in the case of curves. The main theorem asserts that there exists an equivalence of categories between Higgs bundles and generalized representations, provided one allows $\mathbb{C}_{p}$ coefficients. Here $\mathbb{C}_{p}=\widehat{\overline{\mathbb{Q}_{p}}}$ denotes the completion of an algebraic closure of $\mathbb{Q}_{p}$, and $\textbf{o}$ is the ring of integers in $\mathbb{C}_{p}$.
In this article, we show that the $p$-adic Simpson correspondence for line bundles is rigid analytic under suitable conditions, by viewing the moduli spaces on both sides as rigid analytic spaces. This question is the $p$-adic analogue of the fact that there is a complex analytic (but not algebraic) morphism from the moduli space of vector bundles with integrable connection to the moduli space of representations of the fundamental group, both regarded as algebraic varieties (see \cite{Simpson 1994} and \cite{Simpson}).
Turning to details, the problem can be divided into two cases.
\subsubsection{Vector bundle case}
Let $X$ be a smooth proper curve over $\overline{\mathbb{Q}_{p}}$. In \cite{Deninger and Werner 2005a}, C. Deninger and A. Werner defined functorial isomorphisms of parallel transport along \'{e}tale paths for a class of vector bundles on $X_{\mathbb{C}_{p}}=X\times_{Spec \overline{\mathbb{Q}_{p}}} Spec \mathbb{C}_{p}$. The category of such vector bundles is denoted by $\mathcal{B}_{\mathbb{C}_{p}}^{s}$ and contains all vector bundles of degree 0 that have strongly semistable reduction. In particular, all vector bundles in $\mathcal{B}_{\mathbb{C}_{p}}^{s}$ give rise to continuous representations of the algebraic fundamental group on finite dimensional $\mathbb{C}_{p}$-vector spaces. The construction of C. Deninger and A. Werner is compatible with the construction of G. Faltings\cite{Faltings 2005} if the Higgs field $\theta$ is zero.
As a special case of line bundles, suppose that $X$ is a curve with good reduction over $\overline{\mathbb{Q}_{p}}$. Denote the Albanese variety of $X$ by $A$ and the genus of $X$ by $g$. Let $K$ be a finite extension of $\mathbb{Q}_{p}$ in $\overline{\mathbb{Q}_{p}}$ such that $A$ is defined over $K$, i.e., $A=A_{K}\otimes \overline{\mathbb{Q}_{p}}$ for some abelian variety $A_{K}$ over $K$. By \cite{Deninger and Werner 2005a} (line 30-32 of p.20) we have a continuous, $Gal(\overline{\mathbb{Q}_{p}}/K)$\footnote{In \cite{Deninger and Werner 2005a}, the notation $G_{K}$ on p.20 stands for $Gal(\overline{\mathbb{Q}_{p}}/K)$; this arises in line 11 of p.10.}-equivariant homomorphism
\[ \alpha: Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \longrightarrow Hom_{c}(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*}). \]
Here $Hom_{c}(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})$ is the topological group of continuous $\mathbb{C}_{p}^{*}$-valued characters of the algebraic fundamental group $\pi_{1}(X,x)$.
For the rest of the article, suppose that $X/\overline{\mathbb{Q}_{p}}$ has good reduction. Our main result on line bundles is the following:
\begin{theorem} (Theorem 3.2.6)
Assume that $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is topologically $p$-torsion. Then we may enhance the set-theoretical map
\[ \alpha: Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \longrightarrow Hom_{c}(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*}) \] to a rigid analytic morphism
\[ \alpha^{an}: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow
(\mathbb{G}_{m}^{2g})^{an}. \]
\end{theorem}
The main idea is to obtain the morphism $\alpha^{an}$ by gluing rigid analytic morphisms defined on affinoid neighborhoods, from the viewpoint of the induced Lie algebra map. The motivation comes from the Taylor expansion: one first considers the tangent space at the origin.
\subsubsection{Higgs bundle case}
For the Higgs field of rank one Higgs bundles, Faltings constructed the morphism
\[ \Gamma(X, \Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1) \longrightarrow Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*}). \]
The above map is the exponential of a $\mathbb{C}_{p}$-linear map into the Lie algebra of the group of representations. More precisely, the $\mathbb{C}_{p}$-linear map is
\[ \Gamma(X,\Omega_{X}^{1}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}(-1) \longrightarrow (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)) \oplus (H^{1}(X,\mathcal{O}_{X}) \otimes \mathbb{C}_{p}). \]
Using the Hodge-Tate decomposition, we have the isomorphism
\[ H^{1}_{\acute{e}t}(X,\overline{\mathbb{Q}_{p}}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p} \simeq (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)) \oplus (H^{1}(X,\mathcal{O}_{X}) \otimes \mathbb{C}_{p}).\]
Note that $Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}) \simeq H^{1}_{\acute{e}t}(X,\overline{\mathbb{Q}_{p}}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}$; then the exponential map from $\mathbb{C}_{p}$ to $\mathbb{C}_{p}^{*}$ induces the map
\[ Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}) \longrightarrow Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}^{*}). \]
Composing the maps together, we get the desired map.
A morphism of rigid analytic spaces is called \textit{locally rigid analytic} if it is rigid analytic on an open subset. In the next proposition, we show that the morphism from the moduli space of Higgs bundles to the moduli space of representations is locally rigid analytic under certain smallness conditions.
\begin{proposition} (Proposition 4.1.5)
One can enhance the map
\[ \Gamma(X, \Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1) \longrightarrow Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})\] to the morphism of rigid analytic spaces
\[ \Gamma(X,\Omega_{X}^{1}) \otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a} \longrightarrow (\mathbb{G}_{m}^{2g})^{an} \]
corresponding to the small representations of $Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})$.
\end{proposition}
Finally, by \cite{Faltings 2005} there is a bijection between Higgs bundles of degree 0 and representations in the rank one case. The main result of this article can be formulated as follows.
\begin{theorem} (Theorem 4.2.1)
Under some mild conditions in Theorem 3.2.6 and Proposition 4.1.5 and the assumption that \[f: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X,\Omega_{X}^{1})\otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}) \longrightarrow (\mathbb{G}_{m}^{2g})^{an}\] is an affinoid morphism, we deduce that the $p$-adic Simpson correspondence for line bundles
\[ Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})_{small} \longrightarrow Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \times (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1))_{small} \] can be enhanced to the rigid analytic morphism defined on the open rigid analytic subspace $U \subset (\mathbb{G}_{m}^{2g})^{an}$
\[ (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X, \Omega_{X}^{1}) \otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}). \]
\end{theorem}
\begin{remark}
There is another proof of Theorem 1.0.3 in Section 5 of \cite{Heuer}. To be more precise, for a smooth rigid analytic space $X$ over a perfectoid extension $K$ of $\mathbb{Q}_{p}$, the author uses the diamantine universal cover $\tilde{X} \longrightarrow X$ to construct the $p$-adic Simpson correspondence in rank one.
\end{remark}
\subsection*{Acknowledgements}
The work is part of my PhD thesis. I would like to thank my advisor, Mao Sheng, for his invaluable guidance. I am also grateful to the anonymous referee for their helpful comments on the first version of this article. The author was supported by NSFC under Grant Nos. 11721101.
\section{Preliminaries}
In this section, we collect some background on rigid geometry. We shall use these results in later sections to prove our main theorems.
\subsection{Maximum principle for affinoid $K$-algebra}
Let $K$ be a field with a complete nonarchimedean absolute value that is nontrivial. For integers $n \geq 1$, let
\[ \mathbb{B}^{n}(\overline{K})=\{(x_{1},\dots,x_{n}) \in \overline{K}^{n}: |x_{i}| \leq 1\} \]
be the unit ball in $\overline{K}^{n}$.
\begin{definition}
The $K$-algebra $T_{n}=K\langle\xi_{1},\dots,\xi_{n}\rangle$ of all formal power series
\[ \sum \limits_{\nu \in \mathbb{N}^{n}} c_{\nu} \xi^{\nu} \in K[[\xi_{1},\dots,\xi_{n}]], \quad c_{\nu} \in K, \quad \lim \limits_{|\nu| \rightarrow \infty} |c_{\nu}|=0, \]
i.e., of those power series converging on $\mathbb{B}^{n}(\overline{K})$, is called the \textit{Tate algebra} of restricted convergent power series.
\end{definition}
We define the \textit{Gauss norm} on $T_{n}$ by setting
\[ |f|=\max_{\nu}|c_{\nu}|\quad \text{for} \quad f=\sum \limits_{\nu} c_{\nu} \xi^{\nu}. \]
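Since the Gauss norm depends only on the $p$-adic absolute values of the coefficients, it can be computed mechanically. The following Python sketch is our own illustration for rational coefficients inside $\mathbb{Q}_{p}$; the helper names \texttt{vp} and \texttt{gauss\_norm} are hypothetical, not from the text:

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational x, so that |x|_p = p**(-vp(x))."""
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def gauss_norm(coeffs, p):
    """Gauss norm |f| = max_nu |c_nu|_p of f = sum_nu c_nu xi^nu, c_nu rational."""
    return max(Fraction(p) ** (-vp(Fraction(c), p)) for c in coeffs if c != 0)

# f = (1/p) + xi + p^2 xi^2 over Q_p with p = 5: the coefficient 1/5 dominates.
print(gauss_norm([Fraction(1, 5), 1, 25], 5))  # prints 5
```

Here $|1/5|_{5}=5$ dominates $|1|_{5}=1$ and $|25|_{5}=1/25$, matching $\max_{\nu}|c_{\nu}|$.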
\begin{definition}
A $K$-algebra $A$ is called an \textit{affinoid $K$-algebra} if there is an epimorphism of $K$-algebras $\alpha: T_{n} \longrightarrow A$ for some $n\in \mathbb{N}$.
\end{definition}
For elements $f\in A$, set
\[ |f|_{sup}=\sup \limits_{x \in Max \,A} |f(x)| \]
where $Max \,A$ is the set of maximal ideals of $A$ and, for any $x \in Max \,A$, $f(x)$ denotes the residue class of $f$ in $A/x$.
\begin{remark}
In general, $|\,\cdot\,|_{sup}$ is only a $K$-algebra seminorm. Moreover, $|f|_{sup}=0$ if and only if $f$ is nilpotent.
\end{remark}
Now we give a nonarchimedean analogue of the maximum principle in complex analysis.
\begin{theorem}(Theorem 3.1.15 \cite{Bosch})
For any affinoid $K$-algebra $A$ and for any $f \in A$, there exists a point $x \in Max \, A$ such that $|f(x)|=|f|_{sup}$.
\end{theorem}
\subsection{Canonical topology on affinoid $K$-space}
Let $A$ be an affinoid $K$-algebra. The elements of $A$ can be viewed as functions on $Max \, A$: more precisely, define $f(x)$ for $f \in A$ and $x \in Max \, A$ as the residue class of $f$ in $A/x$. Write $Sp\,A$ for the set $Max \, A$ together with its $K$-algebra of functions $A$, and call it the \textit{affinoid $K$-space} associated to $A$.
For an affinoid $K$-space $X=Sp \, A$, set
\[ X(f;\varepsilon)= \{ x \in X: |f(x)| \leq \varepsilon \} \]
for $f \in A$ and $\varepsilon \in \mathbb{R}_{>0}$.
\begin{definition}
For any affinoid $K$-space $X=Sp \, A$, the topology generated by all sets of type $X(f;\varepsilon)$ with $f \in A$ and $\varepsilon \in \mathbb{R}_{>0}$ is called the canonical topology of $X$. Define $X(f):=X(f;1)$.
\end{definition}
Roughly speaking, an \textit{affinoid subdomain} is an open subset with respect to canonical topology which has the structure of affinoid $K$-space (See Section 3.3 of \cite{Bosch}). A subset in $X$ of type
\[ X(f_{1},\dots,f_{r})=\{ x \in X: |f_{i}(x)| \leq 1 \text{ for } i=1,\dots,r \} \]
for functions $f_{1},\dots,f_{r} \in A$ is called a \textit{Weierstrass domain} in $X$.
Now we provide a basis for the canonical topology of affinoid $K$-space.
\begin{proposition}(Proposition 3.3.5 \cite{Bosch})
Let $X=Sp \, A$ be an affinoid $K$-space and $x \in X$ correspond to the maximal ideal $m_{x} \subset A$. Then the sets $X(f_{1},\dots,f_{r}):=X(f_{1}) \cap \dots \cap X(f_{r})$ for $f_{1},\dots,f_{r} \in m_{x}$ and variable $r$ form a basis of neighborhoods of $x$.
\end{proposition}
Finally, the next proposition shows that morphism between affinoid $K$-spaces is continuous with respect to the canonical topology.
\begin{proposition}(Proposition 3.3.6 \cite{Bosch})
Let $\varphi^{*}:A \longrightarrow B$ be a morphism of affinoid $K$-algebras, and let $\varphi:Sp\, B \longrightarrow Sp \, A$ be the associated morphism of affinoid $K$-spaces. Then for $f_{1},\dots,f_{r} \in A$, we have
\[ \varphi^{-1}((Sp\,A)(f_{1},\dots,f_{r}))=(Sp\,B)(\varphi^{*}(f_{1}),\dots,\varphi^{*}(f_{r})).\]
In particular, $\varphi$ is continuous with respect to the canonical topology.
\end{proposition}
\subsection{GAGA functor on rigid analytic space}
Let $X$ be an affinoid $K$-space. For any affinoid subdomain $U \subset X$ we denote the affinoid $K$-algebra corresponding to $U$ by $\mathcal{O}_{X}(U)$. Then $\mathcal{O}_{X}$ is a presheaf of affinoid $K$-algebras on the category of affinoid subdomains of $X$. Moreover, $\mathcal{O}_{X}$ is a sheaf by Tate's acyclicity theorem:
\begin{theorem}(Theorem 4.3.10 \cite{Bosch})
Let $X$ be an affinoid $K$-space and let $\mathcal{U}$ be a finite covering of $X$ by affinoid subdomains. Then $\mathcal{U}$ is acyclic with respect to the presheaf $\mathcal{O}_{X}$ of affinoid functions on $X$.
\end{theorem}
\begin{definition}
A \textit{rigid analytic $K$-space} is a locally $G$-ringed space $(X,\mathcal{O}_{X})$ such that
(i) the Grothendieck topology of $X$ satisfies $(G_{0}),(G_{1})$ and $(G_{2})$ of 5.1/5 in \cite{Bosch}.
(ii) $X$ admits an admissible covering $(X_{i})_{i \in I}$ where $(X_{i}, \mathcal{O}_{X}|_{X_{i}})$ is an affinoid $K$-space for all $i \in I$.
\end{definition}
A \textit{morphism of rigid $K$-spaces} $(X,\mathcal{O}_{X}) \longrightarrow (Y, \mathcal{O}_{Y})$ is a morphism in the sense of locally $G$-ringed $K$-spaces. As usual, global rigid $K$-spaces can be constructed by gluing local ones.
There is a functor that associates to any $K$-scheme $Z$ locally of finite type a rigid $K$-space $Z^{an}$, called the \textit{rigid analytification} of $Z$. For more details, see \cite{Bosch}.
\begin{proposition}(Proposition 5.4.4 \cite{Bosch})\label{proposition 2.10}
Every $K$-scheme $Z$ of locally finite type admits an analytification $Z^{an} \longrightarrow Z$.
Furthermore, the underlying map of sets identifies the points of $Z^{an}$ with the closed points of $Z$.
\end{proposition}
\begin{remark}
When $X$ is a variety over $K$, one can identify the points of $X^{an}$ with the points of $X$.
\end{remark}
\section{Vector bundle case}
\subsection{Lie algebra of Deninger-Werner's map $\alpha$}
Suppose that $X$ is a curve with good reduction over $\overline{\mathbb{Q}_{p}}$ (i.e. there exists a smooth, proper, finitely presented model $\mathcal{X}$ of $X$ over $\overline{\mathbb{Z}_{p}}$) and denote the genus of $X$ by $g$. By \cite{Deninger and Werner 2005a} we obtain a continuous, Galois equivariant homomorphism
\[ \alpha: Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \longrightarrow Hom_{c}(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*}). \]
For the left-hand side, denote the Albanese variety of $X$ by $A$. Then the dual abelian variety $\widehat{A}$ of $A$ is isomorphic to $Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}$ via the $\Theta$-polarization. Consequently $\widehat{A}(\mathbb{C}_{p}) =
Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p})$. For the right-hand side, first we claim that the Tate module $TA$ of the abelian variety $A$ can be identified with the maximal abelian quotient of the \'{e}tale fundamental group $\pi_{1}(X,x)$. In fact, let $f: X \rightarrow A$ be the embedding mapping $x$ to 0. Then $f$ induces the homomorphism
\[f_{*}: \pi_{1}(X,x) \longrightarrow \pi_{1}(A,0). \]
Since $\overline{\mathbb{Q}_{p}}$ is an algebraically closed field, the Lang-Serre theorem (see \cite{Gerard van der Geer and Ben Moonen}, p.159, Theorem (10.36)) asserts that any connected finite \'{e}tale cover of $A$ is again an abelian variety. It follows that the \'{e}tale fundamental group of $A$ is canonically isomorphic to its Tate module.
Since $TA \simeq \prod \limits_{\ell \,\mathrm{prime}}\mathbb{Z}_{\ell}^{2d}$ where $d=\dim(A)$, the \'{e}tale fundamental group of $A$ is an abelian group. By the universal property of the Albanese variety, it follows that the Tate module $TA$ is isomorphic to $\pi_{1}^{ab}(X,x)$ as desired. Since the \'{e}tale fundamental group of $X$ is compact with respect to the profinite topology, its quotient $\pi_{1}^{ab}(X)$ is compact. Note that $\mathbb{C}_{p}^{*}=p^{\mathbb{Q}} \times \textbf{o}^{*}$ and that every continuous one-dimensional representation of a compact group takes values of absolute value 1; thus \[Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}^{*})=Hom(\pi_{1}^{ab}(X),\textbf{o}^{*})=Hom(TA,\textbf{o}^{*}).\]
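The norm-one claim used here can be spelled out in one line (a sketch we include for completeness): for a continuous character $\chi$ of a compact group $G$, the composite $v_{p}\circ\chi$ is a continuous homomorphism from $G$ to $(\mathbb{Q},+)$ with compact image, and the only compact subgroup of $(\mathbb{Q},+)$ is $\{0\}$; schematically,

```latex
\[
  G \xrightarrow{\ \chi\ } \mathbb{C}_{p}^{*}
    \xrightarrow{\ v_{p}\ } \mathbb{Q},
  \qquad
  (v_{p}\circ\chi)(G)=\{0\}
  \ \Longrightarrow\
  \chi(G)\subset \textbf{o}^{*}.
\]
```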
From the previous argument, the map $\alpha$ can be rephrased in terms of the Albanese variety $A$ of $X$ as a continuous, Galois equivariant homomorphism
\begin{align}\label{(3.0.1)}
\alpha: \widehat{A}(\mathbb{C}_{p}) \longrightarrow Hom_{c}(TA, \textbf{o}^{*}).
\end{align}
By Corollary 14 of \cite{Deninger and Werner 2005b}, $\alpha$ is a $p$-adically analytic map of $p$-adic Lie groups.
Let us now determine the Lie algebra map induced by $\alpha$.
From \cite{Deninger and Werner 2005b}, we have the exact sequence of the logarithm on the Lie group $\widehat{A}(\mathbb{C}_{p})$
\[ \xymatrix{0 \ar[r] & \widehat{A}(\mathbb{C}_{p})_{tors} \ar[r] & \widehat{A}(\mathbb{C}_{p}) \ar[r]^-{log} & Lie\widehat{A}(\mathbb{C}_{p})=H^{1}(A,\mathcal{O}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p} \ar[r]& 0\\}. \]
On the torsion subgroups, $\alpha$ is an isomorphism (Proposition 10 of \cite{Deninger and Werner 2005b}), so that we get the following commutative diagram with exact lines:
\begin{align}\label{(3.0.2)}{ \xymatrix{0 \ar[r] & \widehat{A}(\mathbb{C}_{p})_{tors} \ar[d]^{\simeq} \ar[r] & \widehat{A}(\mathbb{C}_{p}) \ar[d]^{\alpha} \ar[r]^{log} & Lie\widehat{A}(\mathbb{C}_{p}) \ar[d]^{Lie\,\alpha}\ar[r]& 0\\
0 \ar[r] & Hom_{c}(TA,\mu) \ar[r] & Hom_{c}(TA,\mathbb{C}_{p}^{*}) \ar[r] & Hom_{c}(TA,\mathbb{C}_{p}) \ar[r] & 0}}\end{align}
Here $\mu$ is the subgroup of the roots of unity in $\mathbb{C}_{p}$.
\subsection{Vector bundles give rise to representations}
The goal of this section is to construct a rigid analytic morphism
\[ \alpha^{an}: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow
(\mathbb{G}_{m}^{2g})^{an} \]
such that the points of $\alpha^{an}$ coincide with the map $\alpha$ as (\ref{(3.0.1)}).
Let us first consider the points of both sides. For the left-hand side, using Remark 2.3.4 we have $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}(\mathbb{C}_{p})=Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p})$. For the right-hand side, one can deduce that $(\mathbb{G}_{m}^{2g})^{an}(\textbf{o})=(\mathbb{G}_{m}^{2g})(\textbf{o})=(\textbf{o}^{*})^{2g}=Hom_{c}(TA,\textbf{o}^{*})$.
To prove that $\alpha^{an}$ exists as a rigid analytic function, it suffices to construct it on an affinoid open subset. We break the proof into the following steps:
\subsubsection{Step 1: Reduction to an affinoid neighbourhood}
The morphism
\[\alpha^{an}: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow (\mathbb{G}_{m}^{2g})^{an}\]
will map $0$ to $1$. On the level of points, we have $\alpha^{an}(\mathbb{C}_{p})=\alpha(\mathbb{C}_{p})$. Fix an element $x_{0}\in (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ and define $y_{0}=\alpha(x_{0})$. Observe that both sides carry a rigid analytic group structure. Consider the translation maps
\[ T_{x_{0}}(x)=x+x_{0},\quad T_{y_{0}}(y)=y\cdot y_{0}\]
and the following diagram
\begin{align}\label{(3.0.3)}{\xymatrixcolsep{5pc}\xymatrix{ U_{0} \ar[d]^{T_{x_{0}}} \ar[r]^{\alpha|_{U_{0}}} & U_{1} \ar[d]^{T_{y_{0}}}\\
U_{x_{0}} \ar[r]^{\alpha|_{U_{x_{0}}}} & U_{y_{0}}} }\end{align}
where $U_{1}$ is a neighborhood of $1 \in (\mathbb{G}_{m}^{2g})^{an}$ and $U_{0}=\alpha^{-1}(U_{1}),U_{y_{0}}=T_{y_{0}}(U_{1}),U_{x_{0}}=T_{x_{0}}(U_{0}) \cap \alpha^{-1}(U_{y_{0}})$ are open with respect to the strong Grothendieck topology.
For any $x \in U_{0}$, we have
\[ (\alpha|_{U_{x_{0}}}\circ T_{x_{0}})(x)=\alpha(x+x_{0})=\alpha(x) \cdot \alpha(x_{0})=(T_{y_{0}}\circ \alpha|_{U_{0}})(x) \]
Hence
\[ \alpha|_{U_{x_{0}}}\circ T_{x_{0}}=T_{y_{0}}\circ \alpha|_{U_{0}}.\]
Namely, the above diagram is commutative.
It suffices to prove that $\alpha|_{U_{0}}$ is rigid analytic; then $\alpha|_{U_{x_{0}}}=T_{y_{0}}\circ \alpha|_{U_{0}}\circ T_{-x_{0}}$ is rigid analytic as well. Assume that $\alpha$ is continuous with respect to the strong Grothendieck topology. It then follows that $\alpha^{an}$ is rigid analytic, by gluing the rigid analytic morphisms.
The technical step is to shrink $U_{0}$ to an open affinoid neighborhood, in order to obtain the natural transformation of the sheaves of rigid analytic functions.
Before shrinking, let us construct the logarithm on rigid analytic spaces, as in the following diagram.
\[ \xymatrixcolsep{5pc}\xymatrix{(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \ar[d] \ar[r]^{log} & H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a} \ar[d]^{l=(Lie\alpha)^{an}}\\
(\mathbb{G}_{m}^{2g})^{an} \ar[r]^{log} & (\mathbb{G}_{a}^{2g})^{an}} \]
Here we identify $Lie\, \textbf{o}^{*}$ with $\textbf{o}$ by means of the invariant differential $\frac{dT}{T}$ on $\mathbb{G}_{m}=Spec\, \textbf{o}[T,T^{-1}]$ and $Lie\, Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p})=Lie\, \widehat{A}(\mathbb{C}_{p})=H^{1}(A,\mathcal{O})\otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}$.
(\romannumeral 1) $log: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$
\vspace{0.2cm}
We start with the definition of the rigid analytic $p$-divisible group introduced by Fargues \cite{Fargues}. A $p$-divisible group is a group that is $p$-torsion and $p$-divisible and such that $\ker[p]$ is a finite group scheme; see \cite{Bergamaschi} (line 7-9 of p.8). The motivation is to relax the ``$p$-torsion'' condition on $p$-divisible groups by replacing it with a condition of ``topological $p$-torsion''. Thus rigid analytic $p$-divisible groups can be viewed as a generalization of $p$-divisible groups in the rigid analytic setting.
Let $K/\mathbb{Q}_{p}$ be a complete valued field.
\begin{definition}
A commutative rigid analytic $K$-group $G$ is said to be \textit{rigid analytic $p$-divisible group} if
(i) (Topologically $p$-torsion\footnote{The notion "topologically $p$-torsion" is in the sense of Berkovich space. The Berkovich spectrum can be viewed as consisting of the data of prime ideal plus the extension of the norm to the residue field. Thus the Berkovich space $|G^{Berk}|$ has far more points than the rigid analytic space $|G^{an}|$.})
For any $g \in G^{Berk}$, $\lim \limits_{n \rightarrow \infty} p^{n}g=0$ with respect to the topology of the Berkovich space $|G^{Berk}|$. Equivalently, if $U$ and $V$ are two affinoid neighborhoods of 0, then for $n \gg 0, p^{n}U \subset V$.
(ii) The morphism $\times p: G \longrightarrow G$ is finite and surjective.
\end{definition}
Let $G$ be a rigid analytic $p$-divisible group. By Proposition 16 of \cite{Fargues}, there is a natural logarithm morphism of rigid analytic spaces
\begin{align}\label{(3.2.2)} {log_{G}: G \longrightarrow Lie(G) \otimes \mathbb{G}_{a}}. \end{align}
Assume that $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is topologically $p$-torsion. Since $Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}$ is an abelian variety, the map $[p]: Pic^{0}_{X/\overline{\mathbb{Q}_{p}}} \longrightarrow Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}$ is an isogeny. By Lemma 5 of \cite{Fargues}, the morphism $\times p: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is finite and surjective, so $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is a rigid analytic $p$-divisible group. Thus the logarithm defined in (\ref{(3.2.2)}) induces a logarithm of rigid analytic spaces
\[ log: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}. \]
The $\mathbb{C}_{p}$-points of $H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$ are $H^{1}(A,\mathcal{O}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}$. Consequently the $\mathbb{C}_{p}$-points of $log$ coincide with the logarithm on the Lie group $Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p})$. Moreover, $log: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$ is a local isomorphism (i.e, there exist open subsets $U \subset (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ and $V \subset H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$ such that $U \simeq V$).
(\romannumeral 2) $log: (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (\mathbb{G}_{a}^{2g})^{an}$
\vspace{0.2cm}
Note that the set of topologically $p$-torsion elements of $\mathbb{G}_{m}^{an}$ is $\widehat{\mathbb{G}}_{m}^{an}:=\{ x\in \mathbb{G}_{m}^{an}:|x-1|<1 \}$ (Example 5(1) of \cite{Fargues} on p.17), so the logarithm $log: \mathbb{G}_{m}^{an} \longrightarrow \mathbb{G}_{a}^{an}$ is defined on $\widehat{\mathbb{G}}_{m}^{an}$. By (\ref{(3.2.2)}), the logarithm $log: (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (\mathbb{G}_{a}^{2g})^{an}$ is defined on the open set of topologically $p$-torsion points of $(\mathbb{G}_{m}^{2g})^{an}$.
For the $\textbf{o}$-points, we have
\begin{align}\label{(3.0.4)} {log: (\mathbb{G}_{m}^{2g})^{an}(\textbf{o})=(\textbf{o}^{*})^{2g} \longrightarrow \textbf{o}^{2g}=(\mathbb{G}_{a}^{2g})^{an}(\textbf{o})}\end{align}
Consider each component $log: \textbf{o}^{*} \longrightarrow \textbf{o}$, set $V_{0}=\{x \in \textbf{o}^{*}: |x-1|<p^{-\frac{1}{p-1}} \}$ and $W_{0}=\{ x \in \mathbb{C}_{p}: |x|<p^{-\frac{1}{p-1}} \}$. The logarithm provides an isomorphism
\[ log: V_{0} \longrightarrow W_{0} \]
whose inverse is the exponential map. It follows that the logarithm, sending $1$ to $0$, is a local homeomorphism. Taking the product topology on both sides of (\ref{(3.0.4)}), we conclude that $log: (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (\mathbb{G}_{a}^{2g})^{an}$ is a local homeomorphism (the power series converges and is hence continuous).
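The ultrametric estimate behind this local homeomorphism can be made explicit: writing $v_{p}$ for the additive valuation, $|y|<r_{p}$ means $v_{p}(y)>\frac{1}{p-1}$, and then every term $y^{k}/k$ with $k\geq 2$ of $\log(1+y)=\sum_{k\geq 1}(-1)^{k+1}y^{k}/k$ has valuation $k\,v_{p}(y)-v_{p}(k)$ strictly larger than $v_{p}(y)$ (since $v_{p}(k)\leq \log_{p}k \leq \frac{k-1}{p-1}$), so $|\log(1+y)|=|y|$. A minimal Python check of this term-by-term estimate, with hypothetical function names of our own:

```python
from fractions import Fraction

def vp_int(n: int, p: int) -> int:
    """p-adic valuation of a positive integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def log_term_valuations(v_y: Fraction, p: int, kmax: int):
    """Valuations v_p(y^k / k) = k*v_p(y) - v_p(k) of the log(1+y) series terms."""
    return [k * v_y - vp_int(k, p) for k in range(1, kmax + 1)]

p = 3                      # here 1/(p-1) = 1/2
v_y = Fraction(1)          # v_p(y) = 1 > 1/2, i.e. |y| < r_p
vals = log_term_valuations(v_y, p, 200)
# The k = 1 term strictly dominates, so v_p(log(1+y)) = v_p(y):
assert min(vals) == vals[0] == v_y
assert all(val > v_y for val in vals[1:])
```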
As a byproduct, we give an explicit expression for the restriction of $\alpha$ to an affinoid neighborhood.
\begin{definition}
The rigid analytic space $(X,\mathcal{O}_{X})$ is \textit{reduced} if for any admissible open subset $U\subset X$, $\mathcal{O}_{X}(U)$ has no nilpotent elements.
\end{definition}
\begin{definition}
A morphism $f: X\longrightarrow Y$ of rigid analytic spaces is called an \textit{affinoid morphism} if there is an admissible affinoid covering $\{V_{i}\}$ of $Y$ such that $f^{-1}(V_{i})$ is an affinoid space for each $i$.
\end{definition}
\begin{proposition}
Let $K$ be a nonarchimedean field of characteristic 0. If $X$ and $Y$ are rigid analytic spaces over $K$ and $X$ is reduced, then for any affinoid morphism $h: X \longrightarrow Y$, $h(\widehat{\overline{K}})=0$ implies $h=0$.
\end{proposition}
\begin{proof}
Let $Y=\cup _{i\in I} Sp B_{i}$ be an admissible affinoid covering. Since $h$ is an affinoid morphism, we obtain $h^{-1}(Sp B_{i})=Sp A_{i}$ for some affinoid algebra $A_{i}$. So we reduce to the affinoid case.
Let the corresponding morphism of affinoid algebras $B\longrightarrow A$ send $b$ to $a$. By the maximum principle (Theorem 2.1.4), there is a point $x\in Max\,A$ such that $|a|_{sup}=|a(x)|$. Let $\varphi:A\longrightarrow \mathbb{C}_{p}$ be the residue map of $x \in Max\,A$. By assumption, $h(\mathbb{C}_{p})=0$. From the
commutative diagram
\[ \xymatrix{B \ar[d] \ar[r]^{0} & \textbf{o}\ar[d]\\
A \ar[r]^{\varphi} & \mathbb{C}_{p}}\]
one can deduce that $|a|_{sup}=|a(x)|=0$. Since $X$ is reduced, it follows that $A$ is reduced. Hence $a=0$ and $h=0$ as desired.
\end{proof}
Since $X$ is a smooth projective curve, we have $Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}$ is reduced. It follows that $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is reduced. The next corollary gives the explicit expression of $\alpha|_{U_{0}}$.
\begin{corollary}
Suppose that $\alpha|_{U_{0}}-exp \circ l \circ log$ is an affinoid morphism. Then $\alpha|_{U_{0}}=exp \circ l \circ log$.
\end{corollary}
\begin{proof}
From the commutative diagram (\ref{(3.0.2)}), we have \[\alpha|_{U_{0}}(\mathbb{C}_{p})=exp \circ l \circ log(\mathbb{C}_{p}).\] By Proposition 3.2.4, it follows that $\alpha|_{U_{0}}=exp \circ l \circ log$.
\end{proof}
\vspace{0.2cm}
Now we turn to the shrinking step.
The logarithm $log: (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (\mathbb{G}_{a}^{2g})^{an}$ induces the isomorphism
\[ log: V_{0}=\{x \in \textbf{o}^{*}: |x-1|<r_{p}:=p^{-\frac{1}{p-1}}\} \longrightarrow W_{0}=\{ x \in \mathbb{C}_{p}: |x|<p^{-\frac{1}{p-1}}\}. \]
First we shrink $V_{0}$ and $W_{0}$ to affinoid spaces. Choose $\varepsilon \in \mathbb{R}_{>0}$ such that $r_{p}^{\prime}=r_{p}-\varepsilon \in |\mathbb{C}_{p}^{\times}|=|p|^{\mathbb{Q}}$; then $V_{0}^{\prime}:=\{x\in \textbf{o}^{*}: |x-1| \leq r_{p}^{\prime} \}\subset V_{0}$ is an affinoid space. Set $W_{0}^{\prime}:=\{ x\in \mathbb{C}_{p}: |x|\leq r_{p}^{\prime}\}\subset W_{0}$; then $W_{0}^{\prime}$ is again an affinoid space. Let $x\in V_{0}$ and $y=x-1$. It follows that $|log(x)|=|log(1+y)|=|y|$ for $|y|<r_{p}$. Hence $log(V_{0}^{\prime})=W_{0}^{\prime}$ and the logarithm provides the isomorphism
\[ log: V_{0}^{\prime} \longrightarrow W_{0}^{\prime}. \]
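The inverse of this isomorphism is the exponential, whose domain is governed by Legendre's formula $v_{p}(k!)=\frac{k-s_{p}(k)}{p-1}$, with $s_{p}(k)$ the sum of the base-$p$ digits of $k$: the term valuations $v_{p}(x^{k}/k!)=k\,v_{p}(x)-v_{p}(k!)$ tend to infinity exactly when $v_{p}(x)>\frac{1}{p-1}$, i.e. $|x|<r_{p}$. A small Python check (our own illustration, with hypothetical function names):

```python
from fractions import Fraction

def digit_sum(k: int, p: int) -> int:
    """Sum s_p(k) of the base-p digits of k."""
    s = 0
    while k:
        s += k % p
        k //= p
    return s

def v_factorial(k: int, p: int) -> Fraction:
    """Legendre's formula: v_p(k!) = (k - s_p(k)) / (p - 1)."""
    return Fraction(k - digit_sum(k, p), p - 1)

def exp_term_valuation(v_x: Fraction, k: int, p: int) -> Fraction:
    """v_p(x^k / k!) = k * v_p(x) - v_p(k!)."""
    return k * v_x - v_factorial(k, p)

p = 2  # boundary of convergence: 1/(p-1) = 1
# v_p(x) = 2 > 1: term valuations grow without bound, so the series converges.
assert exp_term_valuation(Fraction(2), 1024, p) == 1025
# v_p(x) = 1 (i.e. |x| = r_p): along k = 2^n the valuation stays at 1.
assert exp_term_valuation(Fraction(1), 1024, p) == 1
```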
Now consider the logarithm $log: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$. Find an affinoid neighborhood $Sp A\subset H^{1}(A,\mathcal{O}) \otimes \mathbb{G}_{a}$ of $0$ and shrink the inverse image $log^{-1}(Sp A)$ to an affinoid neighborhood $Sp B$ of $0\in (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$. The morphism of affinoid spaces $\varphi=log|_{Sp B}: Sp B \longrightarrow Sp A$ corresponds to a morphism of affinoid algebras $\varphi^{*}: A \longrightarrow B$. Since $log$ is a local isomorphism, there are admissible open subsets $V_{1}\subset Sp B$ and $W_{1} \subset Sp A$ such that $V_{1} \simeq W_{1}$. Let $W_{1}^{\prime}:=W_{1}\cap l^{-1}(W_{0}^{\prime})$ and $V_{1}^{\prime}:=log^{-1}(W_{1}\cap l^{-1}(W_{0}^{\prime}))$; then $V_{1}^{\prime} \simeq W_{1}^{\prime}$. Note that the canonical topology is finer than the Grothendieck topology on affinoid spaces; applying Proposition 2.2.3, there exist $f_{1},\dots,f_{r} \in A$ such that $(Sp A)(f_{1},\dots,f_{r}) \subset W_{1}^{\prime}$. Furthermore, we have
\[ \varphi^{-1}((Sp A)(f_{1},\dots,f_{r}))=(Sp B)(\varphi^{*}(f_{1}),\dots,\varphi^{*}(f_{r}))\]
Observe that $(Sp A)(f_{1},\dots,f_{r})$ and $(Sp B)(\varphi^{*}(f_{1}),\dots,\varphi^{*}(f_{r}))$ are Weierstrass domains, and thus open affinoids. Since $\varphi$ is continuous with respect to the Grothendieck topology (Proposition 3.1.6 of \cite{Bosch}), it follows that there is a homeomorphism between $(Sp B)(\varphi^{*}(f_{1}),\dots,\varphi^{*}(f_{r}))$ and $(Sp A)(f_{1},\dots,f_{r})$. This finishes the shrinking step.
Let $U_{1}=V_{0}^{\prime}\subset (\mathbb{G}_{m}^{2g})^{an}$ in the diagram (\ref{(3.0.3)}). Assume that \[U_{0}=(Sp B)(\varphi^{*}(f_{1}),\dots,\varphi^{*}(f_{r}))\] after shrinking. Note that $\alpha^{an}|_{U_{0}}: U_{0} \longrightarrow U_{1}$ is a morphism of affinoid spaces and that the logarithm provides a local homeomorphism; the morphism \[l: H^{1}(A,\mathcal{O})\otimes \mathbb{G}_{a} \longrightarrow (\mathbb{G}_{a}^{2g})^{an}\] gives a morphism of the corresponding affinoid algebras by the commutative diagram (\ref{(3.0.2)}) on the level of points. Using Lemma 5.3.3 of \cite{Bosch}, we deduce that $\alpha^{an}|_{U_{0}}$ is rigid analytic, as desired.
Now we have proved the following theorem.
\begin{theorem}
Assume that $(Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an}$ is topologically $p$-torsion. Then we may enhance the set-theoretical map
\[ \alpha: Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \longrightarrow Hom_{c}(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*}) \] to the rigid analytic morphism
\[ \alpha^{an}: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \longrightarrow
(\mathbb{G}_{m}^{2g})^{an}. \]
\end{theorem}
\section{Higgs bundle case}
In this section, we prove that the morphism from the moduli space of Higgs bundles to the moduli space of representations is locally rigid analytic under smallness conditions.
Let us begin with the construction of the exponential map from $\mathbb{C}_{p}$ to $\mathbb{C}_{p}^{*}$. Observe that the series $\sum_{k \geq 0} \frac{x^{k}}{k!}$ converges precisely when $|x|<r_{p}=|p|^{\frac{1}{p-1}}$. By analogy with the classical case, we write
\[ exp(x)=\sum_{k \geq 0} \frac{x^{k}}{k!} \]
for the sum of this series whenever it converges. We deduce the property that $exp(x+y)=exp(x)\cdot exp(y)$ for $|x|<r_{p}$ and $|y|<r_{p}$. It is natural to try to construct a homomorphism
\[ f: \mathbb{C}_{p} \longrightarrow \mathbb{C}_{p}^{*} \]
extending the exponential defined above by a series expansion. If such an extension exists, when $x\in \mathbb{C}_{p}$ we can choose a high power $p^{n}$ of $p$ so that
$p^{n}x \in B_{<r_{p}}$ and then
\[ f(x)^{p^{n}}=f(p^{n}x)=exp(p^{n}x). \]
In other words, $f(x)$ has to be a $p^{n}$th root of $exp(p^{n}x)$ in the algebraically closed field $\mathbb{C}_{p}$. This can be done in a coherent way, thus furnishing a continuation of the exponential homomorphism.
Denote the maximal ideal $\{x \in \mathbb{C}_{p}: |x|<1 \}$ of the local ring $\{ x \in \mathbb{C}_{p}: |x| \leq 1 \}$ by $M_{p}$.
\begin{proposition}(Proposition on p.258 \cite{Robert})
There is a continuous homomorphism $Exp: \mathbb{C}_{p} \longrightarrow 1+M_{p}$ extending the exponential mapping, originally defined only on the ball $B_{<r_{p}} \subset \mathbb{C}_{p}$.
\end{proposition}
\begin{remark}
(1) The definition of $Exp$ is indeed not canonical. The possibility of extending the exponential to the whole of $\mathbb{C}_{p}$ was shown in \cite{Durix}.
(2) The difference between $exp$ and $Exp$ is simply the domain of definition, which can be summarized in the following commutative diagram
\[\xymatrix{
B_{<r_{p}} \ar@{^{(}->}[d] \ar[r]^{exp} & 1+B_{<r_{p}}\ar@{^{(}->}[d]\\
\mathbb{C}_{p} \ar[r]^{Exp} & 1+M_{p}
}\]
\end{remark}
\subsection{Higgs field gives rise to representations}
Now we present the construction of the map
\[ \Gamma(X,\Omega_{X}^{1}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}(-1) \longrightarrow Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}^{*}) \]
Before the construction, we remark that there is no Galois action on the Tate twist $\Gamma(X,\Omega_{X}^{1}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}(-1)$. The above map is the exponential of a $\mathbb{C}_{p}$-linear map into the Lie algebra of the group of representations. More precisely, the $\mathbb{C}_{p}$-linear map is
\[ \Gamma(X,\Omega_{X}^{1}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}(-1) \longrightarrow (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)) \oplus (H^{1}(X,\mathcal{O}_{X}) \otimes \mathbb{C}_{p}) \]
where the first component is the identity, and the second one depends on the choice of a lifting of $X$ to $A_{2}(V)$\footnote{The ring $A_{2}(V)$ is defined as follows: let $V$ be a complete DVR with perfect residue field $k$ of characteristic $p>0$ and fraction field $K$ of characteristic $0$. Then Fontaine's ring $A_{inf}(V)$ surjects onto the $p$-adic completion $\widehat{\overline{V}}$ and the kernel is a principal ideal with generator $\xi$. We define $A_{2}(V)$ to be $A_{inf}(V)/(\xi^{2})$.} by \cite{Faltings 2005} (lines 16--17 of p.861). Using the Hodge-Tate decomposition, we have the isomorphism
\[ H^{1}_{\acute{e}t}(X,\overline{\mathbb{Q}_{p}}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p} \simeq (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)) \oplus (H^{1}(X,\mathcal{O}_{X}) \otimes \mathbb{C}_{p}).\]
Note that $Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}) \simeq H^{1}_{\acute{e}t}(X,\overline{\mathbb{Q}_{p}}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}$, so the exponential map from $\mathbb{C}_{p}$ to $\mathbb{C}_{p}^{*}$ induces the map
\[ Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}) \longrightarrow Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}^{*}). \]
Composing the maps together, we get the desired map.
\subsubsection{Exp is not rigid analytic}
\begin{lemma}
The series $\sqrt[p]{1+x}=\sum_{k=0}^{\infty} \binom{\frac{1}{p}}{k}x^{k}$ converges when $|x|<|p|^{\frac{p}{p-1}}$.
\end{lemma}
\begin{proof}
By the strong triangle inequality, convergence of the series $\sum_{k=0}^{\infty} \binom{\frac{1}{p}}{k}x^{k}$ is equivalent to the condition $|\binom{\frac{1}{p}}{k}x^{k}| \rightarrow 0$. Since $ord_{p}(k!)=\frac{k-S_{p}(k)}{p-1}$, where $S_{p}(k)$ is the sum of the digits of the $p$-adic expansion of $k$, one has
\[ |\binom{\frac{1}{p}}{k}x^{k}|=|p|^{k(ord_{p}(x)-\frac{p}{p-1})+\frac{S_{p}(k)}{p-1}}.\]
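Indeed (we include the computation for completeness), each of the $k$ factors in the numerator of $\binom{\frac{1}{p}}{k}=\frac{\frac{1}{p}(\frac{1}{p}-1)\cdots(\frac{1}{p}-k+1)}{k!}$ has valuation $-1$, so that
\[ ord_{p}\binom{\frac{1}{p}}{k}=-k-ord_{p}(k!)=-k-\frac{k-S_{p}(k)}{p-1}, \]
and therefore
\[ ord_{p}\Big(\binom{\frac{1}{p}}{k}x^{k}\Big)=k\,ord_{p}(x)-k-\frac{k-S_{p}(k)}{p-1}=k\Big(ord_{p}(x)-\frac{p}{p-1}\Big)+\frac{S_{p}(k)}{p-1}. \]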
Since $S_{p}(k)=1$ when $k=p^{j}$ is a power of $p$, we have
\[ |\binom{\frac{1}{p}}{k}x^{k}| \rightarrow 0 \Longleftrightarrow k(ord_{p}(x)-\frac{p}{p-1}) \rightarrow \infty, \]
and this happens precisely when $ord_{p}(x)-\frac{p}{p-1} >0$, namely when
\[ ord_{p}(x) >\frac{p}{p-1}, \quad |x|<|p|^{\frac{p}{p-1}}\]
as asserted.
\end{proof}
\begin{proposition}
The exponential map $Exp$ is not rigid analytic, for any choice of the extension $Exp$.
\end{proposition}
\begin{proof}
The main idea of the proof is to show that we cannot extend $exp$ to $Exp$ rigid analytically via progressively increasing balls.
For any $x \in \mathbb{C}_{p}$, there is an integer $n$ such that $p^{n}x \in B_{<r_{p}}(0)$. Thus $Exp$ is defined on the ball $B_{<|p|^{-n}r_{p}}(0)$, where the integer $n$ depends on $x$. We make the construction inductively as follows.
Since $Exp$ is a group homomorphism, we obtain $Exp(px)=Exp(x)^{p}$, and thus $Exp(x)=\sqrt[p]{Exp(px)}$. Set $(Exp)_{0}=exp$. Construct $(Exp)_{1}$ as the composition of the maps
\[ \xymatrix{B_{<|p|^{-1}r_{p}} \ar[r]^{\times p} &B_{<r_{p}} \ar[r]^{exp}&1+B_{<r_{p}} \ar[r]^{\sqrt[p]{1+x}} &1+M_{p}}.\]
Inductively, we can construct $(Exp)_{n}$ on the ball $B_{<|p|^{-n}r_{p}}(0)$ from $(Exp)_{n-1}$. We claim that $(Exp)_{1}$ is not rigid analytic. For $y \in B_{<|p|^{-1}r_{p}}(0)$, let $exp(py)=1+x$ for some $x \in B_{<r_{p}}(0)$. It implies that
\[ |x|=|log(1+x)|=|py|<r_{p}.\]
But by Lemma 4.1.1, $\sqrt[p]{1+x}$ converges only for $|x|<|p|^{\frac{p}{p-1}}<r_{p}$. Thus $(Exp)_{1}$ is not given by an element of the Tate algebra $\mathbb{C}_{p} \langle x \rangle$.
Note that the exponential map $Exp$ is nevertheless locally analytic. In fact, fix $x_{0} \in \mathbb{C}_{p}$. For $x \in B_{<r_{p}}(x_{0})$, we obtain the local expansion
\[Exp(x)=Exp(x_{0})exp(x-x_{0})=Exp(x_{0})\sum_{k=0}^{\infty} \frac{(x-x_{0})^{k}}{k!},\] as wanted. By the convergence obstruction above, however, any extension $\varphi$ of $exp$ to a continuous group homomorphism $\mathbb{C}_{p} \longrightarrow (1+M_{p})^{\times}$ fails to be rigid analytic.
In particular, $(Exp)_{1}$ is not rigid analytic, as claimed.
Construct $(Exp)_{2}$ as the composition of the maps
\begin{align}\label{(4.1.1)}
{\xymatrix{B_{<|p|^{-2}r_{p}} \ar[r]^{\times p} &B_{<|p|^{-1}r_{p}} \ar[r]^{(Exp)_{1}} &1+M_{p} \ar[r]^{\sqrt[p]{1+x}} &\mathbb{C}_{p}^{*}}.}
\end{align}
Since $(Exp)_{1}$ is not given by an element of a Tate algebra, neither is $(Exp)_{2}$, by the above commutative diagram (\ref{(4.1.1)}).
By the construction of $(Exp)_{n}$, we cannot extend $exp$ to $Exp$ rigid analytically. Hence $Exp$ is not rigid analytic.
\end{proof}
\subsubsection{Small condition}
Since the exponential map $Exp$ is not rigid analytic by Proposition 4.1.2, we impose smallness conditions, which amount to restricting $Exp$ to the open ball $\{x \in \mathbb{C}_{p}: |x|<r_{p} \}$. Under these smallness conditions, the exponential map is rigid analytic.
\begin{definition}
A Higgs bundle $(E, \theta)$ on $X$ is called \textit{small} if $p^{\alpha} \,|\, \theta$ for some $\alpha>\frac{1}{p-1}$.
\end{definition}
\begin{definition}
A representation $\rho \in Hom(\pi_{1}^{ab}(X),\textbf{o}^{*})$ is called \textit{small} if $\rho(\gamma_{i}) \equiv 1 \,(mod\, p^{2\beta})$ for any set of generators $\gamma_{1},\dots, \gamma_{2g}$ of $\pi_{1}^{ab}(X)$. \end{definition}
By Faltings' construction in \cite{Faltings 2005}, there is a one-to-one correspondence between small rank one Higgs bundles of degree $0$ and small representations. In the rest of the article, we denote the small Higgs fields by
$(\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1))_{small}$ and the small representations by $Hom(\pi_{1}^{ab}(X), \textbf{o}^{*})_{small}$. The two smallness conditions arise as follows. For small Higgs fields, consider the multiplication on Higgs fields by $p^{\alpha}$ for some $\alpha>\frac{1}{p-1}$. For small representations, denote by $E^{2g}$ the $2g$-dimensional closed unit polydisc and let $Y=E^{2g} \cap (\mathbb{G}_{m}^{2g})^{an}$. Since the elements of $p^{2\beta}\textbf{o}$ are topologically nilpotent, the surjective map $\textbf{o} \longrightarrow \textbf{o}/p^{2\beta}\textbf{o}$ induces the reduction map $\pi: Y \longrightarrow \widetilde{Y}$ (for the definition of the reduction map, we refer to Section 7.1.5 of \cite{BGR}). Then $U=\pi^{-1}(\{1\})$ is a tube\footnote{The tube is defined as follows: let $X$ be a rigid analytic space and let $x$ be a point of $X$. Denote the image of $x$ under the reduction map $red: X\longrightarrow \widetilde{X}$ by $\widetilde{x}$. The tube of $\widetilde{x}$ in $X$ is $red^{-1}(\widetilde{x})$.} of $1$ in $Y\subset (\mathbb{G}_{m}^{2g})^{an}$ and $(U,\mathcal{O}_{U})$ is a rigid analytic subspace of $(\mathbb{G}_{m}^{2g})^{an}$.
The $\textbf{o}$-points of $U$, denoted by $U(\textbf{o})$, are the small representations in $Hom(\pi_{1}^{ab}(X), \textbf{o}^{*})$. It follows that the $p$-adic Simpson correspondence is defined on the open subspace $(U,\mathcal{O}_{U})$ under the smallness conditions. In the following, we fix the parameters $\alpha$ and $\beta$ as in Definition 4.1.3 and Definition 4.1.4 respectively; they are related by $2\beta=\alpha+\frac{1}{p-1}$.
\begin{proposition}
One can enhance the map
\[ \Gamma(X, \Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1) \longrightarrow Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})\] to the morphism of rigid analytic spaces
\[ \Gamma(X,\Omega_{X}^{1}) \otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a} \longrightarrow (\mathbb{G}_{m}^{2g})^{an}, \]
corresponding to the small representations in $Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})$.
\end{proposition}
\begin{proof}
For a small Higgs field $\theta \in \Gamma(X, \Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)$, we have $p^{\alpha} | \,\theta$ for some $\alpha>\frac{1}{p-1}$. Denote the $\mathbb{C}_{p}$-linear map
\[ \Gamma(X,\Omega_{X}^{1}) \otimes_{\overline{\mathbb{Q}_{p}}} \mathbb{C}_{p}(-1) \longrightarrow (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1)) \oplus (H^{1}(X,\mathcal{O}_{X}) \otimes \mathbb{C}_{p}) \]
by $\beta$. Thus $p^{\alpha}\,|\,\beta(\theta)$ and $|\beta(\theta)|\leq |p|^{\alpha}<|p|^{\frac{1}{p-1}}=r_{p}$, since $\alpha>\frac{1}{p-1}$. As the difference between $exp$ and $Exp$ is simply the domain of definition, it follows that $Exp(\beta(\theta))=exp(\beta(\theta))$. Note that $\beta$ is linear and defined by polynomials, hence algebraic; therefore $\beta$ is rigid analytic via the rigid analytification functor. Hence $exp\circ \beta$ is rigid analytic, as desired.
\end{proof}
\begin{remark}
Because of the smallness condition, the non-uniqueness of the extension $Exp$ is not a problem in the proof of Proposition 4.1.5.
\end{remark}
\subsection{Conclusion}
As our main result, we show that the $p$-adic Simpson correspondence for line bundles is rigid analytic under suitable conditions.
By Theorem 3.2.6 and Proposition 4.1.5, it turns out that
\[ f: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X,\Omega_{X}^{1})\otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}) \longrightarrow U \subset (\mathbb{G}_{m}^{2g})^{an} \]
is rigid analytic. On the point level, the map
\[ Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \times (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1))_{small} \longrightarrow Hom(\pi_{1}^{ab}(X),\mathbb{C}_{p}^{*})_{small} \] mapping $(L,\theta)$ to $\rho$ is a homeomorphism by \cite{Faltings 2005}.
\begin{theorem}
Under some mild conditions in Theorem 3.2.6 and Proposition 4.1.5 and the assumption that \[f: (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X,\Omega_{X}^{1})\otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}) \longrightarrow (\mathbb{G}_{m}^{2g})^{an}\] is an affinoid morphism, we deduce that the $p$-adic Simpson correspondence for line bundles
\[ Hom(\pi_{1}^{ab}(X), \mathbb{C}_{p}^{*})_{small} \longrightarrow Pic^{0}_{X/\overline{\mathbb{Q}_{p}}}(\mathbb{C}_{p}) \times (\Gamma(X,\Omega_{X}^{1}) \otimes \mathbb{C}_{p}(-1))_{small} \] can be enhanced to the rigid analytic morphism defined on the open rigid analytic subspace $U \subset (\mathbb{G}_{m}^{2g})^{an}$
\[ (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X, \Omega_{X}^{1}) \otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}). \]
\end{theorem}
\begin{proof}
By the previous argument, $f$ is rigid analytic. It suffices to prove that
\[ f^{-1}: (\mathbb{G}_{m}^{2g})^{an} \longrightarrow (Pic^{0}_{X/\overline{\mathbb{Q}_{p}}})^{an} \times (\Gamma(X,\Omega_{X}^{1}) \otimes \overline{\mathbb{Q}_{p}}(-1) \otimes \mathbb{G}_{a}) \]
is rigid analytic. Since $f$ is an affinoid morphism, one can reduce to the affinoid case. Set $X=Sp B$ and $Y=Sp A$, and let $f: (X,\mathcal{O}_{X}) \longrightarrow (Y,\mathcal{O}_{Y})$ be the corresponding morphism of locally $G$-ringed spaces. There is a bijection between morphisms of affinoid spaces $X \longrightarrow Y$ and morphisms of locally $G$-ringed spaces $(X,\mathcal{O}_{X}) \longrightarrow (Y,\mathcal{O}_{Y})$, so one can view $f:X \longrightarrow Y$ as a morphism of affinoid spaces. Note that the category of affinoid spaces is the opposite of the category of affinoid algebras. Let $\varphi: A \longrightarrow B$ be the associated morphism of affinoid algebras. It follows that $\varphi^{-1}: B \longrightarrow A$ is a morphism of affinoid algebras. Thus $f^{-1}$ is rigid analytic.
\end{proof} | {"config": "arxiv", "file": "2010.10825.tex"} |
TITLE: Are normal subgroups of finite index in an absolute Galois groups open?
QUESTION [3 upvotes]: Let $G$ be a profinite group. It is known that open normal subgroups of $G$ have finite index, but in general not every normal subgroup of finite index is open (see https://ncatlab.org/nlab/show/profinite+group#examples for a counterexample). This last fact is true if the profinite group is (topologically) finitely generated (this is a theorem of Nikolov–Segal).
Let now $K$ be a field. A theorem about the "inverse Galois problem" states that for every profinite group $G$ there are fields $K\subseteq L\subseteq F$ such that $Gal(F/L)\cong G$. This means that also for general Galois groups, it is not true that every normal subgroup of finite index is open.
My question is: could this be always true for absolute Galois groups, i.e. for Galois groups of extensions $\bar{K}/K$ with $K$ field and $\bar{K}$ the separable closure of $K$?
My question is justified by a sentence I found in Silverman's book "The Arithmetic of Elliptic Curves" where, at the beginning of the appendix B.2, it is written
"Let $G_{\bar{K}/K}$ be the Galois group of $\bar{K}/K$. (...) Thus $G_{\bar{K}/K}$ is a profinite group, i.e. an inverse limit of finite groups. As such, it comes equipped with a topology in which a basis of open sets around the identity consists of the collection of normal subgroups having finite index in $G_{\bar{K}/K}$".
REPLY [1 votes]: The paragraph you cited from Silverman is referring to the following fact: In a profinite group, the collection of open normal subgroups form a base of neighborhoods of the identity. (Lemma 1-17, p.29 in Fourier Analysis on Number Fields).
Here on a Galois group, Silverman considers the Krull topology. Concerning the stabilizer, I think he should require the stabilizer to be open rather than just of finite index. | {"set_name": "stack_exchange", "score": 3, "question_id": 3734250} |
\begin{document}
\begin{abstract}
Let $Q$ be a tame quiver of type $\widetilde{\mathbb{A}}_n$ and
$\Rep(Q)$ the category of finite dimensional representations over
an algebraically closed field. A representation is simply called a
module. It will be shown that a regular string module has, up to isomorphism,
at most two Gabriel-Roiter submodules.
The quivers $Q$ with sink-source orientations will be characterized as
those, whose central parts do not contain
preinjective modules.
It will also be shown
that there are only finitely many (central) Gabriel-Roiter measures admitting no
direct predecessors. This fact will be generalized for all tame quivers.
\end{abstract}
\maketitle
\footnotesize{{\it Mathematics Subject Classification} (2010). 16G20,16G70}
\section{Introduction}
Let $\Lambda$ be an Artin algebra and $\mod\Lambda$ the category of
finitely generated left $\Lambda$-modules. For each
$M\in\mod\Lambda$, we denote by $|M|$ the length of $M$. The symbol
$\subset$ is used to denote proper inclusion. The Gabriel-Roiter (GR
for short) measure $\mu(M)$ for a $\Lambda$-module $M$ was defined to be a
rational number
in \cite{R2} by induction as follows:
$$\mu(M)=\left\{\begin{array}{ll}
0 & \textrm{if $M=0$};\\
\max_{N\subset M}\{\mu(N)\} & \textrm{if $M$ is decomposable};\\
\max_{N\subset M}\{\mu(N)\}+\frac{1}{2^{|M|}} & \textrm{if $M$ is indecomposable}.\\
\end{array}\right.$$
(In later discussion, we will use the original definition for our
convenience, see \cite{R1} or Section \ref{GRdef} below.) The
so-called Gabriel-Roiter submodules of an indecomposable module are
defined to be the indecomposable proper submodules with maximal GR
measure.
A rational number $\mu$ is called a GR measure for an algebra $\Lambda$ if
there is an indecomposable $\Lambda$-module $M$ such that $\mu(M)=\mu$.
Using the Gabriel-Roiter measure, Ringel obtained a partition of the
module category for any Artin algebra of infinite representation
type \cite{R1,R2}: there are infinitely many GR measures $\mu_i$ and
$\mu^i$ with $i$ natural numbers, such that
$$\mu_1<\mu_2<\mu_3<\ldots\quad \ldots <\mu^3<\mu^2<\mu^1$$ and such that any
other GR measure $\mu$ satisfies $\mu_i<\mu<\mu^j$ for all $i,j$.
The GR measures $\mu_i$ (resp. $\mu^i$) are called take-off (resp.
landing) measures. Any other GR measure is called a central measure.
An indecomposable module is called a take-off (resp. central,
landing) module if its GR measure is a take-off (resp. central,
landing) measure.
To calculate the GR measure of a given indecomposable module, it is
necessary to know the GR submodules. Thus it is interesting to know
the number of the isomorphism classes of the GR submodules for a
given indecomposable module. It was conjectured that for a
representation-finite algebra (over an algebraically closed field),
each indecomposable module has at most three GR submodules. In
\cite{Ch1}, we have proved the conjecture for representation-finite
hereditary algebras. In this paper, we will start to study the GR
submodules of string modules. In particular, we will show in Section
\ref{string}
that each string module, which contains no band submodules, has at
most two GR submodules, up to isomorphism. As an application, we
show for a tame quiver $Q$ of type $\widetilde{\mathbb{A}}_n$, $n\geq 1$, that
if an indecomposable module on an exceptional regular component of the Auslander-Reiten quiver
has, up to isomorphism, precisely two GR
submodules, then one of the two GR inclusions is an irreducible
monomorphism. A description of the numbers of the GR submodules will
also be presented there.
Let $\mu,\mu'$ be two GR measures for $\Lambda$. We call $\mu'$ a
{\bf direct successor} of $\mu$ if, first, $\mu<\mu'$ and second,
there does not exist a GR measure $\mu''$ with $\mu<\mu''<\mu'$. The
so-called {\bf Successor Lemma} in \cite{R2} states that any
Gabriel-Roiter measure $\mu$ different from $\mu^1$ has a direct
successor. There is no `Predecessor Lemma'. It is
clear that any GR measure different from $\mu_1$, the minimal one,
over a representation-finite Artin algebra
has a direct predecessor. In this paper, we start to study the GR measures
admitting no direct predecessors
for representation-infinite algebras.
An ideal test class is given by the path algebras of tame quivers, as the representation
theory of these algebras is very well understood and many basic properties of the GR measures
for these algebras are already known
\cite{Ch4}. Among them, the quivers of type
$\widetilde{\mathbb{A}}_n$, $n\geq 1$,
are of special interests because the GR submodules of an indecomposable module
can be in some sense easily determined (see \cite{Ch3}, or Proposition \ref{bigprop} below).
In Section \ref{predecessors}, it will be shown that for a quiver of type $\widetilde{\mathbb{A}}_n$
there are only finitely many GR measures admitting no direct predecessors.
This gives rise to the following question: does a
representation-infinite (hereditary) algebra (over an algebraically
closed field) being of tame type imply that there are only finitely
many GR measures having no direct predecessors and vice versa?
The proof of the above mentioned result for quivers of
type $\widetilde{\mathbb{A}}_n$ can be generalized to those of types
$\widetilde{\mathbb{D}}_n$ ($n\geq 4$), $\widetilde{\mathbb{E}}_6$,
$\widetilde{\mathbb{E}}_7$ and $\widetilde{\mathbb{E}}_8$.
Thus we partially answer the above question.
It was shown in \cite{R1} that all landing modules are preinjective
modules in the sense of Auslander-Smal\o\ \cite{AS}. However, not
all preinjective modules are landing modules in general. It is
interesting to study the preinjective modules, which are in the central
part. In Section \ref{precen}, We will show that for a tame quiver $Q$ of type
$\widetilde{\mathbb{A}}_n$, if there is a preinjective central
module, then there are actually infinitely many of them. However, it is
possible that the central part does not contain any preinjective
module. We characterize the tame quivers of type
$\widetilde{\mathbb{A}}_n$ with this property. In particular, we
show that the quiver $Q$ of type $\widetilde{\mathbb{A}}_n$ is
equipped with a sink-source orientation if and only if any
indecomposable preinjective module is either a landing module or a
take-off module.
Throughout, we fix an algebraically closed field $k$ and by algebras, representations or modules
we always mean finite dimensional ones over $k$, unless stated otherwise.
\section{Preliminaries and known results}\label{preliminaries}
\subsection{The Gabriel-Roiter measure}\label{GRdef}
We first recall the original definition of the Gabriel-Roiter measure
\cite{R1}. Let $\mathbb{N}$=$\{1,2,\ldots\}$ be the set of
natural numbers and $\mathcal{P}(\mathbb{N})$ be the set of all
subsets of $\mathbb{N}$. A total order on
$\mathcal{P}(\mathbb{N})$ can be defined as follows: if $I$,$J$
are two different subsets of $\mathbb{N}$, write $I<J$ if the
smallest element in $(I\backslash J)\cup (J\backslash I)$ belongs to
J. Also we write $I\ll J$ provided $I\subset J$ and for all elements
$a\in I$, $b\in J\backslash I$, we have $a<b$. We say that $J$ {\bf
starts with} $I$ if $I=J$ or $I\ll J$.
Let $\Lambda$ be an Artin algebra and $\mod\Lambda$ be the category
of finitely generated left $\Lambda$-modules. For each
$M\in\mod\Lambda$, let $\mu(M)$ be the maximum of the sets
$\{|M_1|,|M_2|,\ldots, |M_t|\}$, where $M_1\subset M_2\subset \ldots
\subset M_t$ is a chain of indecomposable submodules of $M$. We call
$\mu(M)$ the {\bf Gabriel-Roiter measure} of $M$. If $M$ is an
indecomposable $\Lambda$-module, we call an inclusion $T\subset M$
with $T$ indecomposable a {\bf GR inclusion} provided
$\mu(M)=\mu(T)\cup\{|M|\}$, thus if and only if every proper
submodule of $M$ has Gabriel-Roiter measure at most $\mu(T)$. In
this case, we call $T$ a {\bf GR submodule} of $M$.
A subset $I$ of $\mathbb{N}$ is called a GR measure of $\Lambda$ if there is
an indecomposable module $M$ with $\mu(M)=I$.
\begin{remark} Although in the introduction we define
the Gabriel-Roiter measure in a different way, these two definitions (orders) coincide.
In fact, for each
$I=\{a_i|i\}\in\mathcal{P}(\mathbb{N})$, let
$\mu(I)=\sum_{i}\frac{1}{2^{a_i}}$. Then $I<J$ if and only if
$\mu(I)<\mu(J)$.
\end{remark}
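For instance, let $M$ be a uniserial module of length $3$; its indecomposable submodules form a chain $M_1\subset M_2\subset M_3=M$ with $|M_i|=i$, hence
$$\mu(M)=\{1,2,3\}\quad\textrm{and}\quad \mu(\{1,2,3\})=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}=\frac{7}{8}.$$
To illustrate the total order, compare $I=\{1,3\}$ and $J=\{1,2\}$: the smallest element of $(I\backslash J)\cup (J\backslash I)=\{2,3\}$ is $2\in J$, hence $I<J$, in accordance with $\mu(I)=\frac{5}{8}<\frac{3}{4}=\mu(J)$.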
We denote by $\mathcal{T}$, $\mathcal{C}$ and $\mathcal{L}$ the
collection of indecomposable $\Lambda$-modules, which are in the take-off part,
the central part and the landing part, respectively. Now we present one result
concerning Gabriel-Roiter measures, which will be used later on.
For more basic properties we refer to \cite{R1,R2}.
\begin{prop}\label{epi} Let $\Lambda$ be an Artin algebra and $X\subset M$ be a GR
inclusion.
\begin{itemize}
\item[(1)] If $\mu(X)<\mu(Y)<\mu(M)$,
then $|Y|>|M|$.
\item[(2)] There is an irreducible monomorphism $X\ra Y$ with $Y$
indecomposable and an epimorphism $Y\ra M$.
\end{itemize}
\end{prop}
The first statement is a direct consequence of the definition. For a
proof of the second statement, we refer to \cite[Proposition
3.2]{Ch2}.
\subsection{}Let $Q$ be a tame quiver of type
$\widetilde{\mathbb{A}}_n,n\geq 1$, $\widetilde{\mathbb{D}}_n,n\geq 4$, $\widetilde{\mathbb{E}}_6$,
$\widetilde{\mathbb{E}}_7$ or $\widetilde{\mathbb{E}}_8$, and $\Rep(Q)$ the category of
finite dimensional representations over an algebraically closed
field. We simply call the representations in $\Rep(Q)$ modules. We
briefly recall some notations and refer to \cite{ARS, DR} for
details. If $X$ is a quasi-simple module, then there is a unique
sequence $X=X_1 \ra X_2\ra \ldots\ra X_r\ra\ldots$ of irreducible
monomorphisms. Any indecomposable regular module $M$ is of the
form $M\cong X_i$ with $X$ a quasi-simple module (quasi-socle of
$M$) and $i$ a natural number (quasi-length of $M$). The rank of an
indecomposable regular module $M$ is the minimal positive integer
$r$ such that $\tau^rM=M$, where $\tau=DTr$ denotes the Auslander-Reiten translation.
A regular component (standard stable tube) of the
Auslander-Reiten quiver of $Q$ is called exceptional if its rank
(the rank of any quasi-simple module on it) $r>1$. If $X$ is
quasi-simple of rank $r$, then the dimension vector $\udim
X_r=\sum_{i=1}^r\udim\tau^iX=\delta$, where $\delta$ is the minimal
positive imaginary root. Let $|\delta|=\sum_{\nu\in Q}
\delta_\nu$. In particular, if $Q$ is of type $\widetilde{\mathbb{A}}_n$, then
$\delta_\nu=1$ for each $\nu\in Q$ and $|\delta|=n+1$. A quasi-simple module of rank 1 will be called a
homogeneous quasi-simple module. We denote by $H_i$ an indecomposable
homogeneous module with quasi-length $i$. (There are infinitely many
homogeneous tubes. However, the GR measure $\mu(H_i)$ does not depend on the
choice of $H_i$.) We denote by
$\mathcal{P}$, $\mathcal{R}$ and $\mathcal{I}$ the collection of
indecomposable preprojective, regular and preinjective modules,
respectively.
We collect some known facts in the following proposition, which will
be quite often used in our later discussion. The proofs can be found
in \cite[Section 3]{Ch3}.
\begin{prop}\label{bigprop} Let $Q$ be a tame quiver of type $\widetilde{\mathbb{A}}_n$, $n\geq 1$.
\begin{itemize}
\item[(1)] Let $\iota:T\subset M$ be a GR inclusion.
\begin{itemize}
\item[a] If $M\in\mathcal{P}$, then $\iota$ is an irreducible monomorphism.
\item[b] If $M\in\mathcal{R}$ is a quasi-simple module,
then $T\in\mathcal{P}$.
\item[c] If $M=X_i\in\mathcal{R}$ with $X$
quasi-simple and $i>1$, then $T\in\mathcal{P}$ or $T\cong X_{i-1}$.
\item[d] If $M\in\mathcal{I}$, then
$T\in\mathcal{R}$.
\end{itemize}
\item[(2)] If $X\in\mathcal{P}$, then $X\in\mathcal{T}$ and $\mu(X)<\mu(H_1)$.
\item[(3)] Let $H_1$ be a homogeneous quasi-simple module. Then $\mu(H_1)$ is a central measure and $\mu(M)>\mu(H_1)$
if $M\in\mathcal{I}$ satisfies $\udim M>\delta$.
\item[(4)] Let $X$ be a quasi-simple module of rank $r$. Then
\begin{itemize}
\item[a] If $\mu(X_r)< \mu(H_1)$, then
$\mu(X_i)<\mu(H_j)$ for all $i\geq 1$ and $j\geq 1$.
\item[b] If $\mu(X_r)\geq \mu(H_1)$, then $X_{i}$
is the unique GR
submodule of $X_{i+1}$ for every $i\geq r$. If in addition $r>1$, then
$\mu(X_i)>\mu(H_j)$ for all $i>r$ and $j\geq
1$.
\end{itemize}
\item[(5)] Let $\mathbb{T}$ be a stable tube of rank $r>1$. Then there is
a quasi-simple module $X$ on $\mathbb{T}$ such that
$\mu(X_r)\geq \mu(H_1)$.
\item[(6)] Let $S$ be a quasi-simple module of rank $r$ which is also a
simple module. Then $\mu(S_r)<\mu(H_1)$, and thus
$\mu(S_j)<\mu(H_1)$ for all $j\geq 1$.
\item[(7)] Let $M\in \mathcal{I}\setminus\mathcal{T}$ and $N$ be
a GR submodule of $M$. Thus $N\cong X_i$ for some quasi-simple module $X$ by (1)d. Then $\mu(M)>
\mu(X_j)$ for all $j\geq 1$.
\end{itemize}
\end{prop}
\begin{remark}
Some statements in Proposition \ref{bigprop} hold in general.
For example, the statements (2), (4) and (7) and the first argument of the statement
(3) also hold \cite{Ch4} for tame quivers of type
$\widetilde{\mathbb{D}}_n$, $\widetilde{\mathbb{E}}_6$,
$\widetilde{\mathbb{E}}_7$ and $\widetilde{\mathbb{E}}_8$.
The statement (5) is known to be true \cite{Ch5} for tame quivers which are not of type $\widetilde{\mathbb{E}}_8$.
\end{remark}
\begin{lemm}\label{onemap}
Let $Q$ be a tame quiver of type
$\widetilde{\mathbb{A}}_n$. Then for every indecomposable
preinjective module $M$, there is, in each regular component, precisely
one quasi-simple module $X$ such that $\Hom(X,M)\neq 0$. In
particular, each indecomposable preinjective
module has, up to isomorphism, at most one GR submodule in each regular component.
\end{lemm}
\begin{proof} Let $M=\tau^sI_\nu$, where $I_\nu$ is an indecomposable
injective module corresponding to a vertex $\nu\in Q$. It is obvious
that there is a quasi-simple module $X$ on a given regular component such
that $\Hom(X,I_\nu)\neq 0$. Thus $\Hom(\tau^sX,M)\neq 0$. Assume
that $X$ and $Y$ are non-isomorphic quasi-simple modules on the same
tube such that $\Hom(X,M)\neq 0\neq \Hom(Y,M)$. Then
$\Hom(\tau^{-s}X,I_\nu)\neq 0\neq \Hom(\tau^{-s}Y,I_\nu)$. Thus
$(\udim\tau^{-s}X)_\nu\neq 0\neq (\udim\tau^{-s}Y)_\nu$, which is
impossible since $1=\delta_\nu\geq
(\udim\tau^{-s}X)_\nu+(\udim\tau^{-s}Y)_\nu$.
\end{proof}
\section{The number of GR submodules}\label{string}
As we have mentioned in the introduction, the number of GR submodules of
a given indecomposable module over a representation-finite algebra
is conjectured to be bounded by three. In this section, we show
that for a string algebra this number is bounded by two for every
indecomposable string module containing no band submodules. We also describe the number of GR
submodules for the various kinds of indecomposable modules over
a tame quiver $Q$ of type $\widetilde{\mathbb{A}}_n$.
\subsection{String modules} We first recall what string modules are.
For details, we refer to \cite{BR}. Let $\Gamma$ be a string
algebra with underlying quiver $Q$. We denote by $s(C)$ and $e(C)$ the starting and the ending
vertices of a given string $C$, respectively. Let
$C=c_nc_{n-1}\cdots c_2c_1$ be a string. The corresponding string
module $M(C)$ is defined as follows: let $u_i=s(c_{i+1})$ for $0\leq
i\leq n-1$ and $u_n=e(c_n)$. For a vertex $\nu\in Q$, let
$I_\nu=\{i\mid u_i=\nu\}\subset\{0,1,\ldots,n\}$. Then the vector space $M(C)_\nu$
associated to $\nu$ has dimension $\dim M(C)_\nu=|I_\nu|$
and basis $\{z_i\mid i\in I_\nu\}$. If the symbol $c_i$, $1\leq i\leq n$, is an arrow $\beta$,
define $\beta(z_{i-1})=z_i$; if $c_i$ is the
inverse of an arrow $\beta$, define $\beta(z_i)=z_{i-1}$.
Note that the indecomposable string modules are
uniquely determined by the underlying string, up to the equivalence
given by $C\sim C^{-1}$.
If $C=E\beta F$ is a string with $\beta$ an arrow, then the string
module $M(E)$ is a submodule of $M(C)$: let $E$ be of length $n$ and
$F$ of length $m$; then $C$ has length $n+m+1$. If $M(C)$ is
given by the $n+m+2$ vectors $z_0,z_1,\ldots,z_{n+m+1}$, it is easily checked
that the space spanned by the vectors $z_{m+1},z_{m+2},\ldots, z_{n+m+1}$ defines a
submodule, which is $M(E)$. The corresponding factor module is
$M(F)$. If $C=E\beta^{-1} F$ is a string with $\beta$ an arrow, we
obtain similarly an indecomposable submodule $M(F)$ of $M(C)$, with factor
module $M(E)$.
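As a concrete illustration (an example of our own, not taken from the references), consider the quiver $1\stackrel{\alpha}{\ra}2\stackrel{\beta}{\la}3$ and the string $C=c_2c_1$ with $c_1=\alpha$ and $c_2=\beta^{-1}$:

```latex
% Illustrative example (ours): Q is 1 --alpha--> 2 <--beta-- 3, C = beta^{-1}alpha.
% Vertices of C: u_0 = s(c_1) = 1, u_1 = s(c_2) = e(beta) = 2, u_2 = e(c_2) = s(beta) = 3.
\[
  M(C)\colon\quad k \stackrel{\alpha=1}{\ra} k \stackrel{\beta=1}{\la} k,
  \qquad \udim M(C)=(1,1,1),
\]
% with basis z_0, z_1, z_2 and actions alpha(z_0) = z_1 and beta(z_2) = z_1.
% Writing C = E beta^{-1} F with F = alpha and E the trivial string at vertex 3,
% the span of z_0, z_1 is the submodule M(F) = M(alpha) of dimension vector (1,1,0),
% and the factor module is M(E), the simple module at vertex 3.
```

One checks directly that the span of $z_0,z_1$ is closed under the action of $\alpha$ and $\beta$, so it is indeed a submodule, as the general construction predicts.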
\subsection{A ``covering'' of a string module} Let
$C=c_nc_{n-1}\cdots c_2c_1$ be a string. We associate
with $C$ a Dynkin quiver $\mathbb{A}_{n+1}$ as follows: the vertices
are $u_i$, and there is an arrow $u_{i-1}\stackrel{\alpha_i}{\ra}
u_i$ if $c_i$ is an arrow, and an arrow
$u_{i}\stackrel{\alpha_i}{\ra} u_{i-1}$ in case $c_i$ is an inverse
of an arrow. Let $M(C)$ be the string module and
$M_{\mathbb{A}}(C)$ be the unique sincere indecomposable
representation over $\mathbb{A}_{n+1}$.
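For instance (again an illustration of our own), take the quiver $1\stackrel{\alpha}{\ra}2\stackrel{\beta}{\la}3$ and the string $C=c_2c_1$ with $c_1=\alpha$ an arrow and $c_2=\beta^{-1}$ an inverse arrow. The associated Dynkin quiver and module are:

```latex
% Illustrative example (ours): c_1 is an arrow, so u_0 -> u_1;
% c_2 is an inverse arrow, so u_2 -> u_1.
\[
  \mathbb{A}_3\colon\quad u_0 \stackrel{\alpha_1}{\ra} u_1
  \stackrel{\alpha_2}{\la} u_2,
\]
% and M_A(C) is the unique sincere indecomposable representation
\[
  M_{\mathbb{A}}(C)\colon\quad k \ra k \la k,
  \qquad \udim M_{\mathbb{A}}(C)=(1,1,1).
\]
```

Here $C$ has length $2$, so the covering quiver is $\mathbb{A}_{3}$, in accordance with the construction above.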
Before going further, we recall the following lemma, which was proved in \cite{R2}.
\begin{lemm}\label{sing}
Let $M$ and $N$ be indecomposable modules over an Artin algebra $\Lambda$
and let $\sing(N,M)$ be the set of all non-injective homomorphisms from $N$ to $M$. If $N$ is a GR submodule of $M$,
then $\sing(N,M)$ is a subspace of $\Hom_\Lambda(N,M)$.
\end{lemm}
From now on, we assume that $M(C)$ is a string module over $\Gamma$
determined by a string $C$ such that $M(C)$ contains no band submodules.
Thus any submodule of $M(C)$ is a string module.
\begin{lemm}\label{basis} Let $N$ be a GR submodule of $M(C)$.
Then there is a substring $C'$ of $C$, such that
the submodule $X$ of $M(C)$ determined by $C'$ is isomorphic to $N$.
\end{lemm}
\begin{proof}
Let $f\colon N\subset M(C)$ be the inclusion. Then $f$ is a linear combination of basis elements described
in \cite{CB}, say $f_1,\ldots, f_t$. Since $\sing(N,M(C))$ is a subspace by Lemma \ref{sing}, there is some $1\leq i\leq t$ such
that $f_i$ is a monomorphism. By the description of the basis,
$X=\im f_i$ is a submodule of $M(C)$ determined
by a substring $C'$ of $C$. It is obvious that $N\cong X$.
\end{proof}
By this lemma, we may assume without loss of generality that a GR submodule $N$ of $M(C)$ is always
given by a substring of $C$.
Thus we may obtain in an obvious way an indecomposable submodule $\mathcal{G}(N)$
of $M_\mathbb{A}(C)$ using the construction of the quiver $\mathbb{A}_{n+1}$.
\begin{remark}
Note that different monomorphisms $f_i$ in the basis in the proof of Lemma \ref{basis} may
give rise to different $\mathcal{G}(N)$, which are indecomposable and pairwise non-isomorphic.
They all have
the same length $|N|$ by the construction.
\end{remark}
More generally, let $N_1\subset N_2\subset \ldots\subset N_s=N\subset N_{s+1}=M(C)$ be a GR filtration of $M(C)$.
By the above discussion, we may assume that there is a sequence of substrings
$C_1\subset C_2\subset \ldots \subset C_s\subset C$ such that $N_i$
is determined by the substring $C_i$.
Thus we may define
$\mathcal{G}(N_i)$ using the construction of the quiver $\mathbb{A}_{n+1}$.
Therefore, we get inclusions of indecomposable modules over the quiver
$\mathbb{A}_{n+1}$: $\mathcal{G}(N_1)\subset \mathcal{G}(N_2)\subset \ldots\subset \mathcal{G}(N_s)\subset M_\mathbb{A}(C)$.
Conversely,
if $T$ is a submodule of $M_\mathbb{A}(C)$ with inclusion map $f$, then there is a natural way to get a string submodule
$\mathcal{F}(T)$ of $M(C)$. We may also denote the inclusion by $f:\mathcal{F}(T)\subset M$. Then
under the inclusion, $\mathcal{G}\mathcal{F}(T)=T$. It is easily seen that
$\mathcal{F}(\mathcal{G}(N))\cong N$.
Note that $\mathcal{F}$ preserves indecomposables, monomorphisms and lengths. In particular,
$\mu(T)\leq \mu(\mathcal{F}(T))$.
\begin{lemm}\label{propF} We keep the notations as above.
\begin{itemize}
\item[(1)] For each $i$, $\mathcal{G}(N_i)$ is a GR submodule of $\mathcal{G}(N_{i+1})$ and $\mu(\mathcal{G}(N_i))=\mu(N_i)$.
\item[(2)] If $T$ is a GR submodule of $M_\mathbb{A}(C)$, then $\mathcal{F}(T)$ is a GR submodule of
$M$.
\end{itemize}
\end{lemm}
\begin{proof}
(1) We use induction.
Assume first that $i=1$ and that $\mathcal{G}(N_1)$ is not a GR submodule of $\mathcal{G}(N_2)$.
Let $X\subset \mathcal{G}(N_2)$ be a GR submodule and thus $\mathcal{F}(X)$ is isomorphic to a submodule of $N_2$. Since $\mathcal{F}$ preserves monomorphisms,
$\mu(N_1)\geq \mu(\mathcal{F}(X))\geq \mu(X)>\mu(\mathcal{G}(N_1))$. This is impossible since $N_1$ and $\mathcal{G}(N_1)$ are both simple modules. Thus $\mathcal{G}(N_1)$ is a GR submodule of $\mathcal{G}(N_{2})$.
It follows that $\mu(\mathcal{G}(N_2))=\mu(N_2)$ since $|\mathcal{G}(N_2)|=|N_{2}|$.
Now assume that $\mu(\mathcal{G}(N_i))=\mu(N_i)$. Let $X$ be a GR submodule of $\mathcal{G}(N_{i+1})$.
Then $\mu(\mathcal{G}(N_i))\leq\mu(X)\leq \mu(\mathcal{F}(X))\leq \mu(N_i)$. Therefore, $\mu(\mathcal{G}(N_i))=\mu(X)$
by induction and $\mathcal{G}(N_i)$ is a GR submodule of $\mathcal{G}(N_{i+1})$. Hence
$\mu(\mathcal{G}(N_{i+1}))=\mu(N_{i+1})$ since $|\mathcal{G}(N_{i+1})|=|N_{i+1}|$.
(2) Since $N$ is a GR submodule of $M(C)$, $\mu(T)=\mu(\mathcal{G}(N))=\mu(N)\geq\mu(\mathcal{F}(T))\geq \mu(T)$.
Thus $\mu(N)=\mu(\mathcal{F}(T))$ and $\mathcal{F}(T)$ is a GR submodule of $M(C)$.
\end{proof}
For a Dynkin quiver of type $\mathbb{A}$, we have shown in
\cite{Ch1} the following result:
\begin{prop} Let $Q$ be a Dynkin quiver of type $\mathbb{A}$. Then each
indecomposable module has at most two GR submodules and each factor
of a GR inclusion is a uniserial module.
\end{prop}
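This can be seen in a small example of our own (writing $S_\nu$ for the simple module at vertex $\nu$). Take the quiver $1\stackrel{\alpha}{\ra}2\stackrel{\beta}{\la}3$ of type $\mathbb{A}_3$ and the sincere indecomposable module $M$:

```latex
% Illustrative example (ours) over the quiver 1 -> 2 <- 3 of type A_3.
% The proper nonzero indecomposable submodules of M = (k -> k <- k) are
%   S_2 (the simple at the sink), M(alpha) of dimension vector (1,1,0),
%   and M(beta) of dimension vector (0,1,1).
% Both length-2 submodules have GR measure {1,2}, so
\[
  \mu(M)=\{1,2,3\},
\]
% and M(alpha), M(beta) are two non-isomorphic GR submodules of M.
% The GR factors M/M(alpha) = S_3 and M/M(beta) = S_1 are simple,
% hence uniserial, as the proposition asserts.
```

Thus $M$ attains the bound of two GR submodules, and both GR factors are uniserial.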
As a consequence of this proposition and Lemma \ref{propF}, we have
\begin{theo} Let $\Lambda$ be a string algebra and $M(C)$ be a string module
containing no band submodules. Then $M(C)$ contains, up to
isomorphism, at most two GR submodules and the factors of the GR
inclusions are uniserial modules.
\end{theo}
\begin{coro} If $\Lambda$ is a representation-finite string algebra,
then each indecomposable module has, up to isomorphism, at most two GR
submodules and the GR factors are uniserial.
\end{coro}
\subsection{} Now we assume that $Q$ is a tame quiver of type
$\widetilde{\mathbb{A}}_n$. Then every indecomposable regular module
with rank $r>1$ is a string module containing no band submodules, and thus
has at most two GR submodules, up to isomorphism, by the above theorem.
\begin{prop}\label{2gr} If an exceptional regular module has precisely two GR
submodules, then one of the GR inclusions is an irreducible map. In particular,
every exceptional regular module has at most one preprojective GR submodule, up to isomorphism.
\end{prop}
Before proving this proposition, we briefly recall
what the irreducible monomorphisms between string modules
look like. We refer to \cite{BR} for details.
Let $C=c_r\cdots c_2c_1$ be a string such that $c_r$ is an arrow.
If there is an arrow $\gamma$ with $e(\gamma)=e(c_r)$, let $D=d_t\cdots d_2d_1$
be a string with $s(D)=s(d_1)=s(\gamma)$ such that $d_i$ is an arrow for every $i$ and such that
$t\geq 0$ is maximal. Then the natural inclusion $M(C)\ra M(D\gamma^{-1} C)$ is an irreducible monomorphism.
Similarly, let $C=c_r\cdots c_2c_1$ be a string such that $c_1$ is an arrow.
If there is an arrow $\gamma$ with $e(\gamma)=s(c_1)$, let $D=d_t\cdots d_2d_1$
be a string with $e(D)=e(d_t)=s(\gamma)$ such that $d_i$ is an inverse of an arrow for every $i$ and such that $t\geq 0$ is maximal. Then the natural inclusion $M(C)\ra M(C\gamma D)$ is an irreducible monomorphism.
Any irreducible monomorphism between string modules is of one of these forms.
\begin{proof} Let $M(C)$ be an exceptional regular module with
$C=c_m\cdots c_2c_1$, which has precisely (up to isomorphism) two
GR submodules. Then the module $M_\mathbb{A}(C)$ also has two GR
submodules, which are actually given by the irreducible monomorphism
$X\ra M_\mathbb{A}(C)$ and $Y\ra M_\mathbb{A}(C)$. By definition of
$M_\mathbb{A}(C)$, we may identify each arrow $\alpha_i$ or its
inverse in $\mathbb{A}_{m+1}$ with $c_i$ in the string $C$ of
$\widetilde{\mathbb{A}}_n$. We may assume that $X$ is determined by a
string $E$ and $M_\mathbb{A}(C)$ is determined by $F\alpha^{-1}E$,
where $F$ is a composition of arrows or a trivial path and $\alpha$
is an arrow. Then under the above identification, we have
$C=F\alpha^{-1}E$. Let $M(C)\ra M'$ be the unique irreducible
monomorphism with $M'$ determined by a string
$F'\beta^{-1}F\alpha^{-1}E$, where $F'$ is a composition of arrows
or a trivial path and $\beta$ is an arrow. Thus either the ending
vertex $e(F)$ is a sink, or $F$ is a trivial path. Again by the
description of irreducible monomorphisms, the inclusion
$\mathcal{F}(X)\ra M(C)$ is still an irreducible map. Note that
$\mathcal{F}(X)$ is a GR submodule of $M(C)$ by Lemma \ref{propF}.
\end{proof}
\begin{remark} Let $Q$ be a tame quiver of type
$\widetilde{\mathbb{A}}_n$ and $M$ be a non-simple indecomposable
module. Let $gr(M)$ denote the number of the isomorphism classes of the GR submodules of $M$.
\begin{itemize}
\item [(1)] If $M$ is preprojective, then each GR inclusion $X\subset M$
is an irreducible map (Proposition \ref{bigprop}(1)a).
In particular, $gr(M)\leq 2$ since
there are precisely two irreducible maps to $M$, which are monomorphisms.
\item[(2)] If $M$ is a quasi-simple module of rank $r>1$, then
$gr(M)=1$ since $M$ is uniserial.
\item[(3)] If $M$ is a non-quasi-simple regular module of rank $r>1$, then $gr(M)\leq 2$, and
one of the GR inclusions is irreducible in case $gr(M)=2$ (Proposition \ref{2gr}).
\item[(4)] If $M=X_i$ is a regular module with $i>1$, where $X$ is a
quasi-simple module of rank $r\geq 1$ with $\mu(X_r)\geq \mu(H_1)$,
then $gr(M)=1$ and the unique
GR inclusion is an irreducible map (Proposition \ref{bigprop}(4)b).
\item[(5)] If $M$ is preinjective, then $M$ contains, up to isomorphism,
at most one GR submodule in each regular component (Lemma \ref{onemap}). If
we identify the homogeneous modules $H_i$, then $gr(M)\leq 3$. Under this convention,
if $gr(M)=3$, then $M$ contains a homogeneous module $H_i$ as well as an exceptional
module $X_j$ of rank $r>1$ as GR submodules. Note that $\mu(X_{mr})\neq \mu(H_s)$ for any $m>1$ and any $s>1$.
It follows that $i=1$ and $j=r$ and thus $\delta=\udim X_j=\udim H_i$. Therefore, $|M|<2|\delta|$ (Proposition \ref{epi}(2)).
\item[(6)] A homogeneous simple module $H_1$ may contain more GR submodules. For example,
if $n$ is odd and $Q$ has a
sink-source orientation (see \cite[Example 3]{Ch3}),
then the GR measure of a homogeneous simple module is
$\mu(H_1)=\{1,3,5,\ldots, n,n+1\}$.
There are, up to isomorphism, $\frac{n+1}{2}$ indecomposable preprojective modules of length $n$, and
they are pairwise non-isomorphic GR submodules of $H_1$.
In general, $gr(H_1)$ is bounded by the number of the indecomposable summands
of the projective cover of $H_1$.
\item[(7)] One may also define $gr(M)$ as the number of the dimension vectors of the GR submodules of $M$.
Then it is easily seen that $gr(M)\leq 2$ for each indecomposable module $M$ that is not a homogeneous quasi-simple module $H_1$.
\end{itemize}
\end{remark}
\section{Direct predecessor}\label{predecessors}
Let $\Lambda$ be an Artin algebra and $I$ and $J$ be two different GR measures for $\Lambda$.
Then $J$ is called a direct successor of $I$ if, first, $I<J$ and second, there does not
exist a GR measure $J'$ with $I<J'<J$. It is easily seen that if $J$
is the direct successor of $I$, then $J$ is a take-off (resp.
central, landing) measure if and only if so is $I$. Let $I^1$ be the
largest GR measure, i.e. the GR measure of an indecomposable
injective module with maximal length. It was proved in \cite{R2}
that any Gabriel-Roiter measure $I$ different from $I^1$ has a
direct successor. However, there are GR measures, which does not
admit a direct predecessor. By the construction of the take-off
measures and the landing measures \cite{R1}, the GR measures
having no direct predecessors are central measures.
\subsection{} From now on, we fix a tame quiver $Q$ of type
$\widetilde{\mathbb{A}}_n$. The
following proposition gives a GR measure possessing no direct
predecessor.
\begin{prop}\label{H1}The GR measure $\mu(H_1)$ of a homogeneous quasi-simple module $H_1$ has no
direct predecessor.
\end{prop}
\begin{proof}
Assume, to the contrary, that $\mu(M)$ is the
direct predecessor of $\mu(H_1)$ for some indecomposable module $M$.
Since $\mu(H_1)$ is a central measure, so is $\mu(M)$. It follows
that $M$ is not preprojective. Let $Y$ be a GR submodule of $H_1$.
Since $Y$ is preprojective, $\mu(Y)<\mu(M)<\mu(H_1)$ and thus
$|M|>|H_1|$ (Proposition \ref{epi}(1)). If $M$ is preinjective, then there is a monomorphism
$H_1\ra M$ because $|M|>|H_1|$, and hence $\mu(H_1)<\mu(M)$. This
contradiction implies that $M$ is a regular module. Assume that
$M=X_i$ for some quasi-simple module $X$ of rank $r>1$. Because
$|X_i|=|M|>|H_1|$, we have $i> r$. Therefore, $\mu(X_r)<\mu(M)<\mu(H_1)$.
It follows that $\mu(M)<\mu(X_j)<\mu(H_1)$ for all $j>i$ (Proposition \ref{bigprop}(4)a). This is a
contradiction.
\end{proof}
\begin{prop}\label{prepre} Let $M\in\mathcal{I}\setminus\mathcal{T}$. If
$\mu(N)$ is the direct predecessor of $\mu(M)$ for some
indecomposable module $N$, then $N\in\mathcal{I}$ and $|N|>|M|$.
\end{prop}
\begin{proof}Since $\mu(N)$ is not a take-off measure, $N$ is not preprojective. Assume
for a contradiction that $N=Y_j$ is regular for some quasi-simple
module $Y$. Let $X_i$ be a GR submodule of $M$ for some
quasi-simple module $X$ and some $i\geq 1$ (Proposition \ref{bigprop}(1)d). Then $\mu(M)>\mu(X_t)$
for all $t\geq 1$ by
Proposition \ref{bigprop}(7) and thus $\mu(X_t)\neq \mu(Y_j)$ for any $t\geq 1$. Therefore $\mu(X_i)<\mu(Y_j)<\mu(M)$. It follows
that $|Y_j|>|M|$ and $\mu(M)<\mu(Y_{j+1})$ since $\mu(N)=\mu(Y_j)$
is the direct predecessor of $\mu(M)$. Notice that a GR submodule $T$
of $Y_{j+1}$ is either $Y_j$ or a preprojective module. In
particular $\mu(T)<\mu(M)<\mu(Y_{j+1})$. Thus $|M|>|Y_{j+1}|$ by Proposition \ref{epi}(1). This
contradicts $|N|=|Y_j|>|M|$. Therefore, $N$ is preinjective.
\end{proof}
\subsection{}A regular module $X_i$, $i>r$, for some
quasi-simple module $X$ of rank $r$ may contain a
preprojective module as a GR submodule. However, this cannot happen
if $\mu(X_r)\geq \mu(H_1)$, in which case the irreducible monomorphisms
$X_r\ra X_{r+1}\ra X_{r+2}\ra\ldots$ are GR inclusions (Proposition \ref{bigprop}(4)b).
The GR measure $\mu(X_r)$
for a quasi-simple module $X$ of rank $r$ is important when
comparing the GR measures of regular modules $X_i$ and those of
homogeneous modules $H_j$. Namely, there is a similar result that
can be used to compare the GR measures of two non-homogeneous
regular modules.
\begin{lemm}\label{2} Let $X,Y$ be quasi-simple modules of rank $r$ and $s$,
respectively. Assume that $\mu(X_r)\geq \mu(H_1)$.
\begin{itemize}
\item[(1)] If $\mu(X_r)>\mu(Y_s)$, then $\mu(X_i)>\mu(Y_j)$
for all $i\geq r$ and $j\geq 1$.
\item[(2)] If $\mu(X_i)=\mu(Y_j)$ for some $i\geq 2r$, then $r=s$
and $\mu(X_t)=\mu(Y_t)$ for every $t\geq r$.
\item[(3)] If $\mu(X_{2r})>\mu(Y_{2s})$, then
$\mu(X_i)>\mu(Y_j)$ for all $i\geq 2r, j\geq 1$.
\end{itemize}
\end{lemm}
\begin{proof}(1) If $\mu(Y_s)<\mu(H_1)$, then $\mu(Y_j)<\mu(H_1)$ for
all $j\geq 1$. Thus we may assume that $\mu(Y_s)\geq\mu(H_1)$.
Since for each $j\geq s$, $\mu(Y_j)$ starts with $\mu(Y_s)$ and
$|Y_s|=|X_r|=|\delta|$, we have $\mu(X_r)>\mu(Y_j)$.
(2)
By assumption, we have $j\geq 2s$ because $|Y_j|=|X_i|\geq 2|\delta|$.
Since $\mu(X_r)\geq \mu(H_1)$, we have $\mu(Y_s)\geq \mu(H_1)$ and the irreducible
monomorphisms in the sequences
$$X_r\ra X_{r+1}\ra X_{r+2}\ra\ldots,\,\ \textrm{and}\,\ Y_s\ra Y_{s+1}\ra Y_{s+2}\ra\ldots$$
are all
GR inclusions (Proposition \ref{bigprop}(4)).
It follows that
$$\begin{array}{rclll}
\mu(Y_{j})&=&\mu(Y_s)\cup\{|Y_{s+1}|,|Y_{s+2}|,\ldots,|Y_{2s-1}|,|Y_{2s}|,|Y_{2s+1}|,\ldots,|Y_{j}|\}&&\\
&=&\mu(X_r)\cup\{|X_{r+1}|,|X_{r+2}|,\ldots,|X_{2r-1}|,|X_{2r}|,|X_{2r+1}|,\ldots,|X_{i}|\}&=&\mu(X_i).
\end{array}$$
Since $|X_r|=|Y_s|=|\delta|$ and $|X_{2r}|=|Y_{2s}|=2|\delta|$,
we have $\mu(X_r)=\mu(Y_s)$ and
$\mu(X_{2r})=\mu(Y_{2s})$. Note that
$$|X_{r+l}|-|X_{r+l-1}|=|Y_{s+l}|-|Y_{s+l-1}|$$ for all $l\geq 1$.
It follows that $r=s$ and $\mu(X_t)=\mu(Y_t)$ for all $t\geq r=s$.
(3) follows similarly.
\end{proof}
\begin{coro}Let $X$ be a quasi-simple module of rank $r$ such that
$\mu(X_r)\geq \mu(H_1)$. If $M$ is an indecomposable module such
that $\mu(M)=\mu(X_i)$ for some $i\geq 2r$, then $M$ is a regular
module.
\end{coro}
\begin{proof}
Assume, to the contrary, that $M$ is preinjective. Let $Y_t$ be a
GR submodule of $M$ for some quasi-simple module $Y$ of rank $s$.
Then $\mu(M)>\mu(Y_j)$ for all $j\geq 1$ by Proposition
\ref{bigprop}(7). Thus $Y\ncong X$ and $t\geq 2s$ since
$|M|=|X_i|>2|\delta|$ and since there is an epimorphism $Y_{t+1}\ra M$ by Proposition \ref{epi}(2).
Notice that $\mu(Y_s)\geq \mu(H_1)$. Otherwise,
we would have $\mu(Y_{t})<\mu(H_1)$. However, there is a monomorphism $H_1\ra M$ since
$|M|>2|\delta|$. We obtain a contradiction because $Y_t$ is a GR submodule of $M$.
Since $Y_t$ is a GR submodule of $M$ and $\mu(M)=\mu(X_i)$, we have $\mu(Y_t)=\mu(X_{i-1})$.
Therefore, $r=s$ and
$\mu(Y_{t+1})=\mu(X_i)$ by the above lemma. This contradicts
$|Y_{t+1}|>|M|=|X_i|$ (Proposition \ref{epi}(1)). Thus $M$ is regular.
\end{proof}
\subsection{}We have seen in Proposition \ref{bigprop}(4) that
the irreducible maps $H_1\ra H_2\ra H_3\ra\ldots$ are GR inclusions.
One can show more: namely, in \cite{Ch3} (or \cite{Ch4} for the
general case) we proved that $\mu(H_{i+1})$ is the direct successor
of $\mu(H_i)$ for each $i\geq 1$. Let $X$ be a quasi-simple module of rank
$r>1$. It is possible (for example, if $\mu(X_r)\geq \mu(H_1)$) that
all irreducible maps $X_r\ra X_{r+1}\ra X_{r+2}\ra\ldots$ are GR
inclusions. However, it is not true in general that $\mu(X_{j+1})$
is the direct successor of $\mu(X_j)$ for all $j\geq r$ (\cite[Example 4]{Ch3}).
The following proposition says that if $\mu(X_r)\geq
\mu(H_1)$ and $\mu(X_{j+1})$ is not the direct successor of
$\mu(X_j)$, then $j< 2r$.
\begin{prop}\label{ds} Let $X$ be a quasi-simple module of rank $r$ such that
$\mu(X_r)\geq \mu(H_1)$. Then $\mu(X_{j+1})$ is the direct successor
of $\mu(X_{j})$ for each $j\geq 2r$.
\end{prop}
\begin{proof}
We may assume $r>1$. We first show that there does not exist an
indecomposable regular module $M$ such that $\mu(M)$ lies between
$\mu(X_j)$ and $\mu(X_{j+1})$ for any $j\geq 2r$. Assume, to the contrary, that there exists a $j\geq 2r$ and an
indecomposable regular module $M$ with
$\mu(X_j)<\mu(M)<\mu(X_{j+1})$. We may assume that $|M|$ is
minimal. Then $|M|>|X_{j+1}|>2|\delta|$, since $X_j$ is a GR
submodule of $X_{j+1}$. Let $M=Y_i$ for some quasi-simple module
$Y$ of rank $s>1$. It follows that $\mu(Y_s)\geq \mu(H_1)$ and $i>
2s$. Therefore, $Y_{i-1}$ is a GR submodule of $Y_i$ and
$$\mu(Y_{i-1})\leq \mu(X_j)<\mu(M)=\mu(Y_i)<\mu(X_{j+1})$$
by minimality of $|M|$. This implies $\mu(Y_{i-1})=\mu(X_j)$, since
otherwise $|X_j|>|M|>|X_{j+1}|$, which is a contradiction. Observe
that $i-1\geq 2s$ and $j\geq 2r$. Then Lemma \ref{2} implies
$\mu(X_t)=\mu(Y_t)$ for all $t\geq r=s$. This contradicts the
assumption $\mu(X_j)<\mu(M)=\mu(Y_i)<\mu(X_{j+1})$. Therefore, there
are no indecomposable regular modules $M$ satisfying
$\mu(X_j)<\mu(M)<\mu(X_{j+1})$ for any $j\geq 2r$.
Now we assume that $M$ is an indecomposable preinjective module such
that $\mu(X_j)<\mu(M)<\mu(X_{j+1})$. Since $\mu(X_r)\geq \mu(H_1)$,
$X_j$ is a GR submodule of $X_{j+1}$. It follows that $|X_{j+1}|>|M|>|X_j|$
by Proposition \ref{epi}(1). Let $Y_i$ be a GR submodule of
$M$ for some quasi-simple module $Y$ and $i\geq 1$. Then $Y_i\ncong
X_t$ for any $t>0$ by Proposition \ref{bigprop}(7). Comparing the
lengths, we have $\mu(Y_i)\geq \mu(X_j)$. (Namely, if $\mu(Y_i)<\mu(X_j)<\mu(M)$,
then $|X_j|>|M|$ by Proposition \ref{epi}(1) since $Y_i$ is a GR submodule of $M$.
But on the other hand, $|X_j|<|M|$ by previous discussion.)
Thus Proposition
\ref{bigprop}(7) implies that
$\mu(X_j)<\mu(Y_{i+1})<\mu(M)<\mu(X_{j+1})$. Therefore, we get an
indecomposable regular module $Y_{i+1}$ with GR measure lying
between $\mu(X_j)$ and $\mu(X_{j+1})$, which is a contradiction.
The proof is completed.
\end{proof}
\subsection{}
Let $X$ be a quasi-simple module of rank $r$ such that $\mu(X_r)\geq
\mu(H_1)$. For a given $i\geq 2r$, let
$\mu_{i,1}>\mu_{i,2}>\ldots>\mu_{i,t_i}$ be all the distinct GR
measures of the form $\mu_{i,j}=\mu(X_i)\cup\{a_{i,j}\}$ with
$a_{i,j}\neq |X_{i+1}|$ for $1\leq j\leq t_i$. Notice that there
are only finitely many such $\mu_{i,j}$ for each given $i$.
\begin{lemm}\label{befdp}
\begin{itemize}
\item[(1)] $a_{i,j}<|X_{i+1}|$ for all $1\leq j\leq t_i$ and $a_{i,j}<a_{i,l}$ if $1\leq j<l\leq t_i$.
\item[(2)] $\mu_{i,j}>\mu(X_t)$ for all $1\leq j\leq
t_i, t\geq 1$.
\item[(3)] $\mu_{i,j}>\mu_{l,h}$ if $i< l$.
\item[(4)] If $M$ is an indecomposable module such that $\mu(M)=\mu_{i,j}$, then $M\in\mathcal{I}$.
\end{itemize}
\end{lemm}
\begin{proof}
(1) If $a_{i,j}>|X_{i+1}|$, then
$$\mu_{i,j}=\mu(X_{i})\cup\{a_{i,j}\}<\mu(X_i)\cup\{|X_{i+1}|\}=\mu(X_{i+1}).$$
This contradicts the fact that $\mu(X_{i+1})$ is the direct successor of $\mu(X_i)$
(Proposition \ref{ds}). Thus $a_{i,j}<|X_{i+1}|$.
(2) follows from (1) and the fact that $X_{2r}\subset
X_{2r+1}\subset\ldots\subset X_t\subset X_{t+1} \subset\ldots$ is a
sequence of GR inclusions.
(3) If $i<l$, then
$$\begin{array}{rcl}
\mu_{l,h} &=& \mu(X_{l})\cup\{a_{l,h}\}\\
&=& \mu(X_{i})\cup\{|X_{i+1}|,\ldots,|X_l|,a_{l,h}\}\\
&<& \mu(X_{i})\cup\{a_{i,j}\}\\
&=& \mu_{i,j}.
\end{array}$$
(4) If $M$ is not preinjective, then $M$ is regular, say $M=Y_t$ for
some quasi-simple module $Y$ of rank $s$. Thus $t> 2s$ since
$|M|>|X_i|\geq 2|\delta|$, and $\mu(Y_s)\geq \mu(H_1)$. In particular,
$Y_{t-1}$ is a GR submodule of $Y_t$ and
$\mu(Y_{t-1})=\mu(X_i)<\mu(X_{i+1})<\mu(M)=\mu(Y_t)$. This is a
contradiction since $\mu(Y_t)$ is the direct successor of
$\mu(Y_{t-1})$ (Proposition \ref{ds}).
\end{proof}
\begin{prop}\label{dp} The sequence of GR measures
$$\ldots<\mu_{i+1,2}<\mu_{i+1,1}<
\mu_{i,t_i}<\ldots<\mu_{i,j+1}<\mu_{i,j}<\ldots<\mu_{i,2}<\mu_{i,1}$$
is a sequence of direct predecessors.
\end{prop}
\begin{proof}
Let $M$ be an indecomposable module such that
$$\mu(X_i)\cup\{a_{i,j+1}\}=\mu_{i,j+1}<\mu(M)<\mu_{i,j}=\mu(X_i)\cup\{a_{i,j}\}.$$
Then $\mu(M)=\mu(X_i)\cup\{b_1,b_2,\ldots,b_m\}$ with
$a_{i,j}<b_1\leq a_{i,j+1}$. By the choices of
$\mu_{i,j}$, we have $m\geq 2$ and $b_1=a_{i,j+1}$. This implies
that $M$ contains a submodule $N$ with
$\mu(N)=\mu(X_i)\cup\{a_{i,j+1}\}$, which is thus a preinjective
module by the above lemma. However, an indecomposable preinjective
module cannot be a submodule of any other indecomposable module.
We therefore get a contradiction.
Now let $M$ be an indecomposable module such that
$$\mu(X_i)<\mu(X_i)\cup\{|X_{i+1}|,a_{i+1,1}\}=\mu_{i+1,1}<\mu(M)<\mu_{i,t_i}=\mu(X_i)\cup\{a_{i,t_i}\}.$$
It follows that $\mu(M)=\mu(X_i)\cup\{b_1,b_2,\ldots,b_m\}$. By
definition of $\mu_{i,t_i}$, we have
$b_1=|X_{i+1}|<a_{i+1,1}<|X_{i+2}|$ and $m\geq 2$. From $b_2\leq
a_{i+1,1}$ and the definition of $\mu_{i+1,1}$, we obtain that
$b_2=a_{i+1,1}$ and $m\geq 3$. Therefore, $M$ contains an
indecomposable preinjective module $N$ with GR measure
$\mu(X_i)\cup\{|X_{i+1}|,a_{i+1,1}\}$ as a submodule, which is
impossible.
The proof is completed.
\end{proof}
\begin{remark} We should note that some segments of the sequence of
the GR measures in this proposition may not exist. In this case, we
can still show, as in the proof, that, for example, $\mu_{j,1}$ is the
direct predecessor of $\mu_{i,t_i}$ for some $j\geq i+2$.
\end{remark}
\begin{remark} Assume that the measures $\mu_{i,j}$ constructed above are not
landing measures (for example, when $X$ is a homogeneous simple module
$H_1$; see Section \ref{precen}). Since each GR measure different
from $I^1$ has a direct successor, we may construct direct
successors starting from $\mu_{i,1}$ for a fixed $i$. Let $\mu(M)$
be the direct successor of $\mu_{i,1}$. If $M$ is preinjective, then
$|M|<|\mu_{i,1}|=a_{i,1}$ by Proposition \ref{prepre}. Thus after
taking finitely many direct successors, we obtain a regular
measure (meaning that it is a GR measure of an indecomposable
regular module). Proposition \ref{prepre} tells us that all direct
successors starting with this regular measure are still regular
ones. In the other direction, if there are infinitely many
preinjective modules containing some $X_i$, $i\geq 2r$, as GR submodules, then the
sequence $\mu_{i,j}$ is infinite (this does occur in some cases; see
Section \ref{precen}). Thus we obtain a sequence of GR measures
indexed by the integers $\mathbb{Z}$.
\end{remark}
\subsection{}
We fix a tame quiver $Q$ of type $\widetilde{\mathbb{A}}_n$.
There are always GR measures having no direct predecessors, for
example, $\mu(H_1)$ (Proposition \ref{H1}). We are going to show
that the number of GR measures possessing no direct predecessors is
always finite.
\begin{lemm} Let $X$ be a quasi-simple module of rank $r>1$. Assume
that there is an $i\geq 1$ such that $X_i\in\mathcal{C}$ is a
central module. Then there is an $i_0\geq i$ such that
$\mu(X_{j+1})$ is a direct successor of $\mu(X_j)$ for each $j\geq
i_0$.
\end{lemm}
\begin{proof}By Proposition \ref{ds}, we may assume that $\mu(X_r)<\mu(H_1)$.
Since $X_i$ is a central module, $X_j$ is the unique, up to
isomorphism, GR submodule of $X_{j+1}$ for every $j\geq i$. We first
show that there is a $j_0$ such that there does not exist a regular
module with GR measure $\mu$ satisfying $\mu(X_j)<\mu<\mu(X_{j+1})$
for any $j\geq j_0$.
Let $Y$ be a quasi-simple module of rank $s$ such that
$\mu(X_j)<\mu(Y_l)<\mu(X_{j+1})$ for some $j\geq i\geq r$ and $l\geq
1$. In this case, $Y_l$ is a GR submodule of $Y_{l+1}$, since $Y_l$
is a central module and thus $\mu(Y_l)>\mu(T)$ for every preprojective module $T$.
Comparing the lengths, we have
$\mu(Y_{l+1})<\mu(X_{j+1})$, and similarly $\mu(Y_h)<\mu(X_{j+1})$
for all $h\geq 1$. Now replace $j$ by some $j'>j$ and repeat the
above consideration. Since there are only finitely many quasi-simple
modules $Z$ such that $\mu(Z_{r_Z})\leq \mu(H_1)$, where $r_Z$ is
the rank of $Z$, we may obtain an index $j_0$ such that a GR measure
$\mu$ of an indecomposable regular module satisfies either
$\mu<\mu(X_{j_0})$ or $\mu>\mu(X_j)$ for all $j\geq 1$.
Fix the above chosen $j_0$. Assume that there is an indecomposable
preinjective module $M$ such that $\mu(X_j)<\mu(M)<\mu(X_{j+1})$ for
some $j\geq j_0$. Then $\mu(M)$ starts with $\mu(X_j)$ and thus
there is an indecomposable submodule $N$ of $M$ in a GR filtration
of $M$ such that $\mu(N)=\mu(X_j)$. Note that $N$ is a regular
module and thus $N=Y_l$ for some $l\geq 1$. If $X_j\cong N$, then
$\mu(M)>\mu(X_t)$ for all $t\geq 1$, a contradiction. Therefore,
$X_j\ncong N$. It follows that
$\mu(X_j)=\mu(N)<\mu(Y_{l+1})<\mu(M)<\mu(X_{j+1})$, which
contradicts the choice of $j_0$. We can finish the proof by taking
$i_0=j_0$.
\end{proof}
\begin{theo}Let $Q$ be a tame quiver of type $\widetilde{\mathbb{A}}_n$, $n\geq 1$. Then
only finitely many GR measures have no direct
predecessors.
\end{theo}
\begin{proof}
We first show that only finitely many GR measures of regular modules have no direct predecessors.
Let $X$ be a quasi-simple module of rank $r>1$. If
$\mu(X_r)\geq\mu(H_1)$, then for every $i>2r$, $\mu(X_i)$ has a
direct predecessor $\mu(X_{i-1})$ (Proposition \ref{ds}). Thus we
may assume that $\mu(X_r)<\mu(H_1)$. If every $X_i$ is a take-off
module, then $\mu(X_i)$ has a direct predecessor by definition. If
there is an index $i\geq 1$ such that $X_j$ are central modules for
all $j\geq i$, then there is an index $i_0\geq i$ such that
$\mu(X_j)$ is a direct predecessor of $\mu(X_{j+1})$ for every
$j\geq i_0$. Therefore, there are only finitely many GR measures of
indecomposable regular modules having no direct predecessor.
Now it is sufficient to show that all but
finitely many GR measures of preinjective modules have direct
predecessors. Let $M$ be an indecomposable preinjective module.
Since there are only finitely many isomorphism classes of
indecomposable preinjective modules with length smaller than
$2|\delta|$, we may assume that $|M|>2|\delta|$. Thus a GR submodule
of $M$ is $X_i$ for some quasi-simple $X$ of rank $r\geq 1$ and some
$i\geq 2r$. Notice that $\mu(X_r)\geq \mu(H_1)$, since, otherwise,
$\mu(X_i)<\mu(H_1)<\mu(M)$ would imply $|H_1|>|M|$, which is impossible.
Without loss of generality,
we may also assume that there are GR measures $\mu$ starting with
$\mu(X_{i})$ and satisfying $\mu<\mu(M)$. (Namely, if such a $\mu$ does not
exist, we may replace $M$ by an indecomposable preinjective module
$M'$ with $|M'|>|M|+|\delta|$. Then the GR submodule of $M'$ is
$Y_{i'}$ with $Y\ncong X$. In this way, we may finally find an
integer $d$ such that all indecomposable preinjective modules with
length greater than $d$ contain $Z_l,l\geq 2r_Z$ as GR submodules
for some fixed quasi-simple module $Z$. Thus there are infinitely
many indecomposable preinjective modules with GR measures starting
with $\mu(Z_l),l\geq 2r_Z$.) Then Proposition \ref{dp} ensures the
existence of the direct predecessor of $\mu(M)$.
\end{proof}
\subsection{Tame quivers}
After showing that for a tame quiver of type
$\widetilde{\mathbb{A}}_n$ there are only finitely many GR measures
having no direct predecessors, we realized that this fact also holds for
any tame quiver, i.e. a quiver of type $\widetilde{\mathbb{D}}_n$,
$\widetilde{\mathbb{E}}_6$, $\widetilde{\mathbb{E}}_7$ or
$\widetilde{\mathbb{E}}_8$. We outline the proof of this fact in the following.
\begin{theo}\label{tame}Let $Q$ be a tame quiver.
Then there are only finitely many GR measures having no direct
predecessors.
\end{theo}
The proof of this theorem is almost the same as that for the
$\widetilde{\mathbb{A}}_n$ case. As we remarked after Proposition
\ref{bigprop}, the statements (2), (4) and (7) in
Proposition \ref{bigprop} hold in general \cite{Ch4}. Using these, we can show Lemma \ref{2} for all tame quivers.
Proposition \ref{ds} remains true, but its proof
has to be changed slightly because, in general, a GR submodule of
a preinjective module is not necessarily a regular module. The first
part of the proof of Proposition \ref{ds} is valid in the general case. The second part
has to be changed as follows:
\begin{proof} Assume that $M$ is an indecomposable preinjective module such that
$\mu(X_i)<\mu(M)<\mu(X_{i+1})$ with $|M|$ minimal. Let $N$ be a GR
submodule of $M$. Comparing the lengths, we have
$\mu(X_i)\leq\mu(N)$. If $N=Y_j$ is regular for some quasi-simple
module $Y$ of rank $s$, then
$\mu(M)>\mu(Y_{j+1})>\mu(Y_j)\geq\mu(X_i)$. This contradicts the
first part of the proof. If $N$ is preinjective, then
$\mu(N)=\mu(X_i)$ by the minimality of $|M|$. Thus a GR filtration
of $N$ contains a regular module $Z_{2t}$ for a quasi-simple $Z$ of
rank $t$. It follows that $\mu(X_{2r})=\mu(Z_{2t})$. Thus by Lemma \ref{2} and Proposition \ref{bigprop}(7)
we have
$\mu(M)>\mu(N)>\mu(Z_{i+1})=\mu(X_{i+1})$, which is a contradiction.
\end{proof}
Lemma \ref{befdp} is true in general. However, Proposition \ref{dp}
should be replaced by the following one:
\begin{prop}\begin{itemize}
\item[(1)]There are only finitely many GR measures lying between
$\mu_{i,j}$ and $\mu_{i,j+1}$.
\item[(2)] There are only finitely many GR measures lying between
$\mu_{i,t_i}$ and $\mu_{i+1,1}$. In particular, $\mu_{i,j}$ has a direct predecessor.
\end{itemize}
\end{prop}
\begin{proof} Assume that $M$ is an indecomposable module such that
$\mu(X_i)\cup\{a_{i,j+1}\}=\mu_{i,j+1}<\mu(M)<\mu_{i,j}=\mu(X_i)\cup\{a_{i,j}\}$.
Then $\mu(M)=\mu(X_i)\cup \{b_1,b_2,\ldots,b_m\}$. By the definition of
$\mu_{i,j}$, we have $b_1=a_{i,j+1}$ and $m\geq 2$. In particular,
$M$ has a GR filtration containing an indecomposable module $N$ such
that $\mu(N)=\mu(X_i)\cup\{b_1\}$, which is thus preinjective.
However, there are only finitely many indecomposable modules
containing a given indecomposable preinjective module as a
submodule. It follows that there are only finitely many GR measures starting
with $\mu(N)=\mu(X_i)\cup\{b_1\}$. Therefore, the number of GR
measures lying between $\mu_{i,j+1}$ and $\mu_{i,j}$ is finite
for each $i\geq 2r$.
(2) follows similarly. Notice that
the first remark after Proposition \ref{dp} still works for this
case.
\end{proof}
The remaining proof of Theorem \ref{tame} is similar.
\section{Preinjective central modules}\label{precen}
In \cite{R1}, it was proved that all landing modules are
preinjective in the sense of Auslander and Smal\o \,\ \cite{AS}.
There may exist infinitely many preinjective central modules. In
this section, we study the preinjective modules and the
central part. Throughout this section, $Q$ is a fixed tame
quiver of type $\widetilde{\mathbb{A}}_n$.
\subsection{}We first describe the landing modules.
\begin{prop}Let $M$ be an indecomposable preinjective module. Then
either $M\in\mathcal{L}$ or $\mu(M)<\mu(X)$ for some indecomposable
regular module $X$.
\end{prop}
\begin{proof} If $\mu(M)=\mu(X_i)$ for some quasi-simple module $X$, then we
have $\mu(M)<\mu(X_j)$ for all $j>i$. Thus we may assume that $\mu(M)> \mu(X)$ for all regular modules
$X\in\mathcal{R}$. Let $\mu_1$ be the direct successor of $\mu(M)$
and $\mathcal{A}(\mu_1)$ the collection of indecomposable modules
with GR measure $\mu_1$. It follows that $\mathcal{A}(\mu_1)$
contains only preinjective modules. Let $Y^1\in\mathcal{A}(\mu_1)$
and $X^1\ra Y^1$ be a GR inclusion. Since $X^1\in\mathcal{R}$, we
have $\mu(X^1)<\mu(M)<\mu(Y^1)=\mu_1$. Thus $|M|>|Y^1|$. Let $\mu_2$
be the direct successor of $\mu_1$ and $Y^2\in\mathcal{A}(\mu_2)$.
As above we have $|Y^1|>|Y^2|$. Repeating this procedure, we get a
sequence of indecomposable preinjective modules $M=Y^0,Y^1,
Y^2,\ldots, Y^n,\ldots$ such that $\mu(Y^i)$ is the direct successor
of $\mu(Y^{i-1})$ and $|Y^i|<|Y^{i-1}|$. Because the lengths
decrease, there is some $j<\infty$ such that $\mu(Y^j)$ has no
direct successor. It follows that $\mu(Y^j)=I^1$ and $\mu(M)$ is a
landing measure.
\end{proof}
\begin{coro} Let $M$ be an indecomposable module.
Then $\mu(M)>\mu(X)$ for all regular modules $X$ if and only if $M$
is a landing module.
\end{coro}
\begin{prop} If $M, N$ are landing modules, then $\mu(M)<\mu(N)$ if
and only if $|M|>|N|$.
\end{prop}
\begin{proof} Assume that $\mu(M)<\mu(N)$. Let $X$ be a GR
submodule of $N$. Since $X$ is a regular module, we have
$\mu(X)<\mu(M)<\mu(N)$ and thus $|M|>|N|$.
\end{proof}
\begin{prop}\label{landing} Assume that there is a stable tube of rank $r>1$. Then
all but finitely many landing modules contain only exceptional regular modules
as GR submodules.
\end{prop}
\begin{proof} Let $M$ be a landing module which is thus
preinjective. Thus the GR submodules of $M$ are all regular modules.
Assume that $M$ contains homogeneous modules $H_i$ as GR submodules.
Let $\mathbb{T}$ be a stable tube of rank $r>1$. Then there exists
a quasi-simple module $X$ on $\mathbb{T}$ such that
$\mu(X_r)\geq\mu(H_1)$ (Proposition \ref{bigprop}(5)). Thus
$\mu(H_i)<\mu(X_{r+1})<\mu(M)$ and therefore, $|X_{r+1}|>|M|$. This
implies $i=1$ and $|M|<2|\delta|$.
\end{proof}
\begin{coro}\label{cen} Assume that there is a stable tube of rank $r>1$.
If $M$ is an indecomposable module containing homogeneous modules $H_i$ as
GR submodules for some $i\geq 2$, then $M$ is a central module.
\end{coro}
\subsection{}
We partition the tame quivers of type
$\widetilde{\mathbb{A}}_n$ into three classes; for each case we study
the possible preinjective central modules.
{\bf Case 1} Assume that in the quiver $Q$ there is a clockwise path
of arrows $\alpha_1\alpha_2$ and a counterclockwise path
$\beta_1\beta_2$ as follows:
$$\xymatrix{\bullet\ar@{--}[d]\ar[r]^{\alpha_2} & 1 \ar[r]^{\alpha_1} & \bullet\ar@{--}[d]\\
\bullet\ar[r]_{\beta_2} & 2 \ar[r]_{\beta_1} & \bullet} $$
Let $C$ be a string starting with $\alpha_2^{-1}$ and ending with
$\beta_2$. Thus $s(C)=1,e(C)=2$. It is obvious that the string
module $M(C)$ contains both simple regular modules $S(1)$ and
$S(2)$, which are in different regular components, as submodules.
Thus $M(C)$ is an indecomposable preinjective module. Fix such a
string $C$ whose length is large enough that $M(C)$
contains a homogeneous module $H_1$ as a submodule. The GR submodules
of $M(C)$ have one of the following forms: $S(1)_i, S(2)_j, H_t$ for
some $i,j,t\geq 1$. However,
$\mu(S(1)_i)<\mu(H_1),\mu(S(2)_{i})<\mu(H_1)$ for all $i\geq 0$
(Proposition \ref{bigprop}(6)). Thus the GR submodules of $M(C)$ are homogeneous
modules. In particular, there are infinitely many indecomposable
preinjective modules containing only homogeneous modules as GR
submodules. Thus there are infinitely many preinjective central
modules by Corollary \ref{cen}.
\smallskip
As an example, we consider the following quiver
$Q=\widetilde{\mathbb{A}}_{p,q}$, $p+q=n+1$, with precisely one source and one
sink:
$$\xymatrix@R=12pt@C=18pt{
& \bullet\ar[r]^{\alpha_2} & \bullet\ar[r]^{\alpha_3} && \cdots
& \bullet\ar[r]^{\alpha_{p-1}} &\bullet\ar[rd]^{\alpha_p}&\\
1\ar[ru]^{\alpha_1}\ar[rd]_{\beta_1}&&&&&&& n+1\\
& \bullet\ar[r]_{\beta_2} & \bullet\ar[r]_{\beta_3} & &\cdots
& \bullet\ar[r]_{\beta_{q-1}}& \bullet\ar[ru]_{\beta_q} &\\ }$$
There are two stable tubes $\mathbb{T}_X$ and $\mathbb{T}_Y$
consisting of string modules. The stable tube $\mathbb{T}_Y$
contains the string module $Y$ determined by the string
$\beta_q\beta_{q-1}\cdots\beta_2\beta_1$, and simple modules $S$
corresponding to the vertices ${s(\alpha_i)}, 2\leq i\leq p$ as
quasi-simple modules. The rank of $Y$ is $p$. The other tube $\mathbb{T}_X$
contains the string module $X$ determined by the string
$\alpha_p\alpha_{p-1}\cdots\alpha_2\alpha_1$ and simple modules $S$
corresponding to the vertices ${s(\beta_i)}, 2\leq i\leq q$. The
rank of $X$ is $q$. All the other stable tubes contain only band
modules.
We can easily determine the GR measures of these quasi-simple
modules. Notice that any non-simple quasi-simple module ($X$,$Y$ and
$H_1$) contains $S(n+1)$ as the unique simple submodule. Therefore,
each homogeneous simple module $H_1$ has GR measure
$\mu(H_1)=\{1,2,3,\ldots, n,n+1\}$ and the GR measures of $X$ and
$Y$ are $\mu(X)=\{1,2,3,\ldots,p,p+1\}$ and
$\mu(Y)=\{1,2,3,\ldots,q,q+1\}$. It is easily seen that
$X_q\subset X_{q+1}\subset\ldots
\subset X_j\subset\ldots $ is a chain of GR inclusions and thus
$\mu(X_q)=\mu(H_1)$. Similarly, $Y_p\subset Y_{p+1}\subset\ldots
\subset Y_j\subset\ldots $ is a chain of GR inclusions and
$\mu(Y_p)=\mu(H_1)$.
Any non-sincere indecomposable module belongs to the take-off part. This
is true because the GR submodule of $H_1$ is a uniserial module and
has GR measure $\{1,2,3,\ldots,n\}$ and a non-sincere
indecomposable module has length smaller than $|\delta|$. Let $M\in
\mathcal{I}$ be a sincere indecomposable preinjective module and
$T\subset M$ a GR submodule. We claim that $T$ is isomorphic to some $H_i$,
$X_{sq}$ or $Y_{tp}$ for some $i, s,t\geq 1$. First of all, $T\ncong S_i$
for any simple regular module $S$ and any $i\geq 1$, since $\mu(S_i)<\mu(H_1)$ (Proposition \ref{bigprop}(6))
and there is a monomorphism $H_1\ra M$ for each homogeneous module $H_1$.
If, for example, $T\cong X_i$ for some $i\geq q$, then there is an epimorphism $X_{i+1}\ra M$ (Proposition \ref{epi}(2)). Thus $|X_i|<|M|<|X_{i+1}|$. However, $|X_{i+1}|-|X_i|=1$ if $i$ is not divisible by $q$.
Notice that if $p\geq 2$ and $q\geq 2$, then there are infinitely
many preinjective central modules by the above discussion.
\smallskip
{\bf Case 2} $Q=\widetilde{\mathbb{A}}_{p,1}$. Let's keep the
notations in the above example. By Proposition \ref{landing}, we
know that there are infinitely many landing modules containing only
exceptional modules of the form $Y_i$ as GR submodules. Given an
indecomposable preinjective module $M$ and its GR submodule $Y_i, i>
p$. We claim that the GR submodules of $\tau M$ are homogeneous
ones. Namely, if $\tau M$ contains an exceptional regular module $N$
as a GR submodule, then $N\cong Y_j$ for some $j\geq p$. In
particular, both $M$ and $\tau M$ contains $Y$ as a submodule, i.e.
$\Hom(Y,M)\neq 0\neq\Hom(Y,\tau M)$. Therefore, we have
$\Hom(\tau^{-}Y, M)\neq 0\neq \Hom(Y,M)$, which contradicts Lemma
\ref{onemap}. Thus, there are infinite many indecomposable
preinjective modules containing only homogeneous modules as GR
submodules and hence infinitely many preinjective central modules.
\smallskip
{\bf Case 3} $Q\neq \widetilde{\mathbb{A}}_{p,q}$ is of the
following form: all non-trivial clockwise (or counterclockwise)
paths (compositions of arrows) are of length $1$. In this case, all
exceptional quasi-simple modules in one of the exceptional tubes are
of length at least $2$, and the quasi-simple modules on the other
exceptional tube have length at most $2$.
Let $p=\beta_t\ldots\beta_2\beta_1$ be a composition of arrows in
$Q$ with maximal length. Thus there is an arrow $\alpha$ with ending
vertex $e(\alpha)=e(p)$ such that $s(\alpha)$ is a source. Let $X=M(p)$ be
the string module, which is thus a quasi-simple module, say with
rank $r$. By the maximality of $p$ and the description of
irreducible maps between string modules, we may easily deduce that
the sequence of irreducible monomorphisms $X=X_1\ra X_2\ra \ldots \ra
X_r\ra X_{r+1}\ra \ldots $ is in fact a sequence of GR inclusions.
Therefore
$$\mu(X_{r+1})=\{1,2,3,\ldots,t+1=|X_1|, |X_2|, |X_3|,\ldots, |X_r|,
|X_{r+1}|\}$$ with $|X_i|-|X_{i-1}|\geq 2$ for $2\leq i\leq r$ and
$|X_{r+1}|=|X_r|+(t+1)$.
Let $Y$ be the string module determined by the arrow $\alpha$. It is
also a quasi-simple module, say with rank $s$. It is clear that $X$ and $Y$ are
not in the same regular component. By the
description of irreducible monomorphisms, we obtain that $|Y_j|=j+1$
for $j\leq t$ and $|Y_{t+1}|=t+3$. Thus
$$\mu(Y_{s+1})\geq \{1,2,\ldots, t+1,t+3, |Y_{t+2}|,\ldots,
|Y_s|,|Y_{s+1}|\}$$ with $|Y_i|-|Y_{i-1}|\leq 2$ for $i\leq s$ and
$|Y_{s+1}|=|Y_s|+2$.
This proves the following lemma.
\begin{lemm}
We keep the notations as above. If $t>1$, i.e. $Q$ is not equipped
with a sink-source orientation, then $\mu(Y_s) \geq \mu(X_r) $ and
$\mu(Y_j)>\mu(X_i)$ for $i\geq 1$ and $j> s$.
\end{lemm}
\subsection{}
Now we characterize the tame quivers $Q$ of type
$\widetilde{\mathbb{A}}_n$ such that no indecomposable preinjective
modules are central modules. We also show that there are always
infinitely many preinjective central modules if any.
\begin{theo}Let $Q$ be a tame quiver of type $\widetilde{\mathbb{A}}_n$.
Then $\mathcal{I}\cap\mathcal{C}=\emptyset$ if and only if $Q$ is equipped with
a sink-source orientation.
\end{theo}
\begin{proof}
If $n=1$, then $Q$ obviously has a sink-source orientation, and the central part
contains precisely the regular modules (see, for example, \cite{R1}). Now we assume
$n\geq 2$. Thus there exists an exceptional regular component. If
$\mathcal{I}\cap\mathcal{C}=\emptyset$, a sincere indecomposable
preinjective is always a landing module. Then the proof of
Proposition \ref{landing} implies that there is no indecomposable
preinjective modules $M$ containing a homogeneous module $H_i,i\geq
2$ as GR submodules. Therefore, by above partition of the tame quiver of type $\widetilde{\mathbb{A}}_n$, we
need only to consider Case 3 and show that
$\mathcal{I}\cap\mathcal{C}=\emptyset$ implies $t=1$ (let's keep the
notations in case 3). Assume for a contradiction that $t>1$. Let $S$
be the simple module corresponding to $s(\beta_t)$. Thus $S$ is a
quasi-simple module of rank $s$ and $ \tau S\cong Y$. Let $I$ be the
(indecomposable) injective cover of $S$. It is obvious that
$\Hom(X,I)\neq 0$. Consider the indecomposable preinjective modules
$\tau^{um}I$, where $u$ is a positive integer and $m=[r,s]$ is the
lowest common multiple of $r$ and $s$. Since
$\Hom(S,\tau^{um}I)\neq 0\neq \Hom(X,\tau^{um}I)$, a GR submodule of
$\tau^{um}I$ is either $S_i$ or $X_j$. Notice that
$\mu(H_1)>\mu(S_i)$ for all $i\geq 0$ since $S$ is simple.
Therefore, for $u$ large enough, the unique GR submodule of
$\tau^{um}I$ is $X_j$ for some $j\geq 1$, because no indecomposable
preinjective module contains $H_i$ as a GR submodule for $i\geq
2$. In particular, there are infinitely many preinjective modules
containing GR submodules of the form $X_j$, $j\geq 1$. Thus we may
select a GR inclusion $X_j\subset M$ with $M\in\mathcal{I}$ such that $|X_j|>|Y_{s+1}|$. Because
$\mu(X_j)<\mu(Y_{s+1})<\mu(M)$, we have $|Y_{s+1}|>|M|$. This
contradicts $|X_j|>|Y_{s+1}|$. Thus we have $t=1$ and $Q$ is equipped with a
sink-source orientation.
Conversely, if $Q$ is with a
sink-source orientation, we may see directly that
$\mathcal{I}\cap\mathcal{C}=\emptyset$ (for details, see \cite[Example 3]{Ch3}).
\end{proof}
\begin{theo}Let $Q$ be a tame quiver of type $\widetilde{\mathbb{A}}_n$. Then $\mathcal{I}\cap\mathcal{C}\neq
\emptyset$ if and only if $|\mathcal{I}\cap\mathcal{C}|=\infty$.
\end{theo}
\begin{proof} We have seen in Corollary \ref{cen} that an indecomposable module containing homogeneous modules
$H_i, i\geq 2$ as GR submodules is a central module. Thus we may
assume that there are only finitely many indecomposable preinjective
modules containing homogeneous modules as GR submodules. Thus, we need
only consider Case 3. Let's keep the notations there. Then
$\mathcal{I}\cap\mathcal{C}\neq \emptyset$ implies that $Q$ is not
with a sink-source orientation. In particular, the length $t$ of the
longest path of arrows $\beta_t\cdots\beta_1$ is greater than $1$.
Therefore, $\mu(Y_j)>\mu(X_i)$ for all $i\geq 1, j>s$. Again let
$m=[r,s]$. By assumption, the GR submodules of $\tau^{um}I$ are of
the form $X_i$ for almost all $u\geq 1$. To avoid a contradiction as
in the proof of the above theorem, $\mu(\tau^{um}I)$ must be smaller
than $\mu(Y_{s+1})$ for $u$ large enough and thus almost all
$\tau^{um}I$ are central modules.
\end{proof}
\bigskip
{\bf Acknowledgments.} The author is grateful to the referees for valuable
comments and helpful suggestions, which make the article more
readable. He also wants to thank Jan Schr\"oer for many useful discussions. | {"config": "arxiv", "file": "0909.5594.tex"} |
\begin{document}
\begin{abstract}
We investigate the group $G\pv H$ obtained by gluing together two groups $G$ and $H$ at the neutral element. This construction curiously shares some properties with the free product but others with the direct product.
Our results address among others Property~(T), \cat0 cubical complexes, local embeddability, amenable actions, and the algebraic structure of $G\pv H$.
\end{abstract}
\maketitle
\section{Introduction}
Given two groups $G$ and $H$, we can define an unorthodox sort of product group $G\pv H$ as follows. Take the disjoint union of $G$ and $H$ as sets and glue them together by identifying their neutral element $e$. On the resulting set, let $G$ act regularly on itself by left multiplication, and trivially elsewhere. Proceed similarly with $H$. Then $G\pv H$ is defined as the permutation group generated by these copies of $G$ and $H$.
This construction has a few quirks; for instance, if $G$ and $H$ are both finite, then $G\pv H$ is often a simple group (Theorem~\ref{thm:finite} below). Thus we shall focus mostly on infinite groups, where the following observation restores some credit to the concept of $G\pv H$ as a ``product'' of its subgroups $G, H$.
\begin{prop}
For infinite groups $G$ and $H$, there is a canonical epimorphism $G\pv H\twoheadrightarrow G\times H$ which is compatible with the inclusions of $G,H$ into $G\pv H$ and into $G\times H$.
Thus there is a canonical identification $(G\pv H)/[G, H] \cong G\times H$.
\end{prop}
In other words, for infinite groups, $G\pv H$ sits between the free product and the direct product with canonical epimorphisms
$$G * H \lraa G\pv H\lraa G\times H.$$
\itshape The theme of this article is that $G\pv H$ is similar to $G * H$ in some respects, but closer to $G\times H$ in others. \upshape Taken together, these antagonistic tendencies show that $G\pv H$ is a simple device for constructing unusual groups.
\medskip
A first elementary illustration of this duplexity is seen when comparing individual elements $g\in G$ and $h\in H$, viewed in $G\pv H$. In the free product, $g$ and $h$ would freely generate a free group as soon as they have infinite order. For $G\pv H$, one checks the same as long as inverses are not allowed:
\begin{prop}\label{prop:pong}
If $g\in G$ and $h\in H$ have infinite order, then they freely generate a free semigroup in $G\pv H$.
\end{prop}
In the direct product, much to the contrary, $g$ and $h$ commute. A simple computation shows that in $G\pv H$ the commutator $[g,h]$ is still trivial up to $3$-torsion:
\begin{prop}\label{prop:3tor}
For any $g\in G$ and $h\in H$, we have $[g,h]^3=e$ in $G\pv H$.
\end{prop}
For our next illustration, consider Kazhdan's property~(T). Recall that ${G\times H}$ has property~(T) if and only if both $G$ and $H$ do~\cite[1.9]{Harpe-Valette}. Contrariwise, $G*H$ never has property~(T) when $G$ and $H$ are non-trivial~\cite[6.a]{Harpe-Valette}.
\begin{thm}\label{thm:T}
Let $G$ and $H$ be infinite groups. Then $G\pv H$ does not have Kazhdan's property~(T).
\end{thm}
Thus, from the perspective of property~(T), it seems that $G\pv H$ is more similar to $G*H$ than to $G\times H$. A closer look, however, could support the opposite stance. Indeed, the proof that $G*H$ fails property~(T) comes from Bass--Serre theory. Namely, property~(T) implies the weaker property~(FA) of Serre, which is incompatible with free products. Here again, $G\times H$ has property~(FA) if and only if both $G$ and $H$ do. Now however $G\pv H$ shares this trait:
\begin{thm}
Let $G$ and $H$ be any groups. Then $G\pv H$ has Serre's property~(FA) if and only if both $G$ and $H$ do.
\end{thm}
From this it is clear that, contrary to the free product case, the obstruction to property~(T) recorded in Theorem~\ref{thm:T} does not come from an action on a tree: for instance, when $G$ and $H$ are infinite Kazhdan groups, $G\pv H$ fails~(T) but retains~(FA). The obstruction does, however, come from an infinite-dimensional generalisation of a tree:
\begin{thm}
Let $G$ and $H$ be infinite groups. Then $G\pv H$ acts by automorphisms on a \cat0 cubical complex $V$ without bounded orbits.
Moreover, each of $G$ and $H$ has fixed vertices in $V$, and there is a unique choice of such vertices that are adjacent.
\end{thm}
This complex can thus be seen as a (weak) replacement of the Bass--Serre tree for $G*H$.
\bigskip
Before going any further, we should lift some of the mystery around the structure of $G\pv H$. Since we defined $G\pv H$ as a permutation group generated by $G$ and $H$, we shall keep our notation straight by writing $\ul G$ and $\ul H$ for the sets with identified neutral elements; thus $\ul G \cap \ul H = \{\ul e\}$. Let further $\Altf$ denote the group of \emph{even} finitely supported permutations of any given set.
\begin{prop}\label{prop:E}
If $G$ and $H$ are non-trivial, then $G\pv H$ contains $\Altf(\ul G \cup \ul H)$.
Moreover, if $G$ and $H$ are infinite, then $\Altf(\ul G \cup \ul H)$ coincides with the kernel of the canonical epimorphism $G\pv H\twoheadrightarrow G\times H$.
\end{prop}
We should not conclude that all the mystery dissolves in view of the extension
$$1 \lra \Altf(\ul G \cup \ul H) \lra G\pv H \lra G\times H \lra 1.\leqno{\text{(E)}}$$
For instance, a general construction associates to any group $L$ its \textbf{lampshuffler} group $\Altf(L) \rtimes L$; however, $G\pv H$ cannot be described simply in terms of $G\times H$ and a lampshuffler. Besides the fact that $G\pv H$ is generated by just $G$ and $H$, the extension~(E) is more complicated than a semidirect product, as our next result shows. Recall that every infinite finitely generated group has either one, two or infinitely many ends and that the one-ended case is in a sense generic since the two others have strong structural restrictions by Stallings' theorem~\cite{Stallings68, Stallings_book}.
\begin{thm}
Let $G,H$ be one-ended finitely generated groups.
Then the canonical epimorphism $G\pv H\twoheadrightarrow G\times H$ does not split.
\end{thm}
This contrasts with the fact that, by construction, each of the factors $G$ and $H$ lifts. It was pointed out to us by Yves Cornulier that the above statement can also be deduced from non-realisability results for near actions, specifically Theorem~7.C.1 in~\cite{Cornulier_near_arx} (see Proposition~2.6 in~\cite{Cornulier_real}).
\medskip
An example of an issue that is immediately settled by the extension~(E) is the \emph{amenability} of $G\pv H$: this group is amenable if and only if both $G$ and $H$ are so. This is exactly like for $G\times H$, whereas the free product of infinite groups is never amenable.
But as soon as we refine the amenability question, $G\pv H$ swings back very close to $G*H$. Recall that a subgroup $G<\Pi$ is \textbf{co-amenable} in $\Pi$ if there is a $\Pi$-invariant mean on $\Pi/G$. Thus $G$ is co-amenable in $G\times H$ exactly when $H$ is amenable, but $G$ is only co-amenable in $G*H$ when $G*H$ itself is amenable or $H$ is trivial. The exact same behaviour is displayed by $G\pv H$:
\begin{thm}\label{thm:co-amen}
Given any two groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item $G$ is a co-amenable subgroup of $G\pv H$.
\item $G\pv H$ is amenable or $H$ is trivial.
\end{enumerate}
\end{thm}
We can also consider more general amenable actions of non-amenable groups. Van Douwen has initiated the study of the class of groups admitting a faithful transitive amenable action by showing that free groups belong to this class~\cite{vanDouwen}. Many more examples have been discovered; turning back to the free product $G*H$ of infinite groups, it always belongs to this class when \emph{at least one} of $G$ or $H$ does~\cite{Glasner-Monod}. In contrast, once again $G\times H$ belongs to that class if and only if both $G$ and $H$ do.
In the case of $G\pv H$, we have no definitive answer unless we specialise to \emph{doubly} transitive actions, in which case the next two propositions display once more the two-fold tendencies of this group.
\begin{prop}\label{prop:amen}
Given two infinite groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item At least one of $G$ or $H$ is amenable.
\item $G\pv H$ admits a faithful doubly transitive amenable action.
\end{enumerate}
\end{prop}
A stronger form of amenability for actions is \textbf{hereditary} amenability, where every orbit of every subgroup is required to be amenable. This can be further strengthened to \textbf{extensive} amenability as defined in~\cite{JMMBS18}.
\begin{prop}\label{prop:extamen}
Given two infinite groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item Both $G$ and $H$ are amenable.
\item $G\pv H$ admits a faithful doubly transitive extensively amenable action.
\item $G\pv H$ admits a faithful doubly transitive hereditarily amenable action.
\end{enumerate}
\end{prop}
Finally, we turn to an approximation property for $G\pv H$. Following Malcev~\cite[\S7.2]{Malcev_book}, a group is \textbf{locally embeddable into finite groups}, or \textbf{LEF}, if every finite subset of the group can be realised in a finite group with the same multiplication map (where defined). In particular, residually finite groups are LEF. More trivially, so are locally finite groups. This notion was further studied notably by St\"{e}pin~\cite{Stepin83, Stepin84} and Vershik--Gordon~\cite{Vershik-Gordon}; we refer to~\cite{Vershik-Gordon} and to~\cite[Chapter~7]{Ceccherini-Silberstein-Coornaert} for more background. A more general approximation property is \emph{soficity}, as introduced by Gromov~\cite{Gromov_ax} and Weiss~\cite{Weiss00}.
It is easy to see that direct products preserve LEF, and it is known that free products do so too, see Corollary~1.6 in~\cite{Berlai16}. We obtain the following weaker form of permanence.
\begin{thm}\label{thm:LEF}
If $G$ and $H$ are two residually finite groups, then $G \pv H$ is locally embeddable into finite groups. In particular, it is sofic.
\end{thm}
This turns out to imply a behaviour contrasting with both free and direct products:
\begin{cor}\label{cor:rf}
If $G$ and $H$ are residually finite and infinite, then $G \pv H$ is not finitely presented.
\end{cor}
We close this introduction with a comment on functoriality. The definition of $G\pv H$ seems rather natural, informally, or at least it is an obvious construct to consider from the viewpoint of pointed permutation groups. It is not, however, natural in the mathematical sense for the category of groups. Weaker statements hold, for instance naturality for monomorphisms of infinite groups. Note also that this ``product'' satisfies an evident commutativity, but fails associativity. We refer to Section~\ref{sec:nat} for all these observations, and to Section~\ref{sec:gen} for generalisations of the construction.
Finally, Section~\ref{sec:q} proposes a few questions.
\subsection*{Acknowledgements}
We are very grateful to Yves Cornulier for his comments on an earlier version of this note; he brought to our attention several references that we were not aware of. Likewise, we thank Pierre de la Harpe warmly for his comments and references.
\setcounter{tocdepth}{1}
\tableofcontents
\section{First properties}
In the entire text, we keep the following conventions. Given two groups $G$ and $H$, we write $\ul G$ for a copy of $G$ viewed as a $G$-set under the left multiplication action (i.e.\ $g \ul g' = \ul{g g'}$), and similarly for $\ul H$. It is part of the definition that the sets $\ul G$ and $\ul H$ are disjoint except for $\ul e$. That is, we write $e$ for the neutral element of any group and this causes no ambiguity since the neutral elements of $G$ and of $H$ are both mapped to a single point $\ul e$ in $\ul G \cup \ul H$. As is customary with products, we emphasise that we consider two ``copies'' of $G$ when writing $G\pv G$.
We identify $G$ with the group of permutations of $\ul G \cup \ul H$ that fixes $\ul H \smallsetminus \{\ul e\}$ pointwise and acts as $G$ on $\ul G$; likewise for $H$. Thus $G$ and $H$ are subgroups of $G\pv H$, which is by definition the group generated by these two permutation groups of $\ul G \cup \ul H$.
We can practise these definitions by showing that $g\in G$ and $h\in H$ freely generate a free semigroup in $G\pv H$ as soon as they are both of infinite order:
\begin{proof}[Proof of Proposition~\ref{prop:pong}]
The argument is a \emph{pong lemma}, the pong ball being $\ul e$. Let $g\in G$ and $h\in H$ be elements of infinite order. Given two ``non-negative words'' $W, W'$ in $g$ and $h$, i.e.\ elements of the free monoid on $\{g, h\}$, we denote by $w, w'$ their evaluations in $G\pv H$. Suppose for a contradiction that $W\neq W'$ but that $w=w'$.
There is no loss of generality in taking a pair $W, W'$ minimising the sum of the lengths of these two words. Since the words cannot both be empty, we can assume that $W$ has a rightmost letter and there is again no loss of generality in supposing that this letter is $g$. It follows that $w \ul e$ lies in $\ul G \smallsetminus\{\ul e\}$ because every (non-empty) right prefix of $W$ will map $\ul e$ to some $\ul{g^n}$ with $n>0$. Therefore, $w' \ul e$ also lies in $\ul G \smallsetminus\{\ul e\}$ and in particular $W'$ is also non-empty. If the rightmost letter of $W'$ were $h$, the same argument would show that $w' \ul e$ lies in $\ul H \smallsetminus\{\ul e\}$. Thus, $W'$ also admits $g$ as its rightmost letter; this contradicts the minimality of the pair $W, W'$.
\end{proof}
Given three distinct elements $x,y,z$ of a set, we recall that $(x;y;z)$ denotes the permutation given by the $3$-cycle
$$x \mapsto y \mapsto z \mapsto x$$
(and fixing the remainder of the set). Our convention for commutators is $[g,h]= g h g\inv h\inv$.
The following computation implies in particular already Proposition~\ref{prop:3tor}.
\begin{prop}\label{prop:tricycle}
If $g\in G$ and $h\in H$ are both non-trivial, then $[g, h] = \tricycle{e}{g}{h}$.
\end{prop}
\begin{proof}
Let $x \in G \smallsetminus \{e, g\}$. Since $H$ acts trivially on $\ul G \smallsetminus \{\ul e\}$ and $g\inv \ul x \neq \ul e$, we have
\begin{align*}
[g, h] \ul x &= g h g\inv h\inv \ul x = g h g\inv \ul x \\
&= g h \ul{g\inv x} = g \ul{g\inv x} \\
&= \ul x.
\end{align*}
A similar computation shows that $[g, h] \ul x = \ul x$ for any $x \in H \smallsetminus \{e, h\}$. Lastly, on the subset $\{\ul e, \ul g, \ul h\}$, the permutation $[g, h]$ acts as a $3$-cycle:
\begin{align*}
[g, h] \ul e &= ghg\inv \ul{h\inv} = g h \ul{h\inv} = \ul g, \\
[g, h] \ul g &= ghg\inv \ul{g} = g h \ul{e} = \ul h, \\
[g, h] \ul h &= ghg\inv \ul{e} = g h \ul{g\inv} = \ul e.
\end{align*}
\end{proof}
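The commutator computation above can be verified mechanically on a small example. The following Python sketch (our own illustration; the point encoding is ad hoc and not from the text) models $G\pv H$ for $G=H=\mathbb{Z}/5\mathbb{Z}$ as permutations of the $9$-point set $\ul G\cup\ul H$ and checks that $[g,h]$ is the $3$-cycle $\tricycle{e}{g}{h}$, hence that $[g,h]^3=e$.

```python
# Encode the glued set: point 0 is e, points 1..4 are g, g^2, g^3, g^4,
# and points 5..8 are h, h^2, h^3, h^4.  (Ad hoc encoding for illustration.)

def from_cycle(n, cycle):
    """Permutation of range(n), as a tuple, acting as the given cycle."""
    p = list(range(n))
    for i, x in enumerate(cycle):
        p[x] = cycle[(i + 1) % len(cycle)]
    return tuple(p)

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

N = 9
g = from_cycle(N, (0, 1, 2, 3, 4))  # left translation by g on the copy of G
h = from_cycle(N, (0, 5, 6, 7, 8))  # left translation by h on the copy of H

# The commutator [g, h] = g h g^{-1} h^{-1}.
comm = compose(compose(g, h), compose(inverse(g), inverse(h)))

assert comm == from_cycle(N, (0, 1, 5))                       # (e; g; h)
assert compose(comm, compose(comm, comm)) == tuple(range(N))  # [g, h]^3 = e
```

Note that each generator fixes the other copy pointwise, exactly as in the definition of $G\pv H$.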
Note that the subgroup $[G, H]$ of $G \pv H$ is normal since $G \pv H$ is generated by $G$ and $H$. The above commutator computation elucidates the subgroup $[G, H]$ and establishes the first part of Proposition~\ref{prop:E}:
\begin{prop}\label{prop:containsaltf}
Suppose that $G$ and $H$ are non-trivial groups.
Then $[G, H] = \Altf (\ul G \cup \ul H)$ holds in $G \pv H$.
In particular, $G \pv H$ contains $\Altf (\ul G \cup \ul H)$ as a normal subgroup. Moreover, this subgroup has trivial centraliser in $G \pv H$ unless $\abs{G} = \abs{H} = 2$.
\end{prop}
\begin{proof}
It is well-known that the group $\Altf(\ul G \cup \ul H)$ is generated by all $3$-cycles. Note on the other hand that the action of $G \pv H$ on $\ul G \cup \ul H$ is transitive by construction. Therefore, any $3$-cycle can be conjugated by an element of $G \pv H$ in such a way that it is of the form $(\ul e; x; y)$. Since $[G, H]$ is normal, we can restrict our attention to such $3$-cycles and show that they are indeed in $[G, H]$. This already follows from Proposition~\ref{prop:tricycle} when $x$ and $y$ are not both simultaneously in $\ul G$ or in $\ul H$. By symmetry between $G$ and $H$, it therefore only remains to show that $[G, H]$ contains every $3$-cycle $\tricycle{e}{g}{g'}$, where $g$ and $g'$ are distinct non-trivial elements of $G$.
To this end, consider any non-trivial element $h\in H$. Then the element $g\inv h$ of $G \pv H$ maps the triple $(\ul e,\ul g,\ul g')$ to $(\ul h, \ul e, \ul{g\inv g'})$. Therefore it conjugates the $3$-cycle $\tricycle{e}{g}{g'}$ to
$$\tricycle{h}{e}{g\inv g'} = \tricycle{e}{g\inv g'}{h}$$
which we already know to be in $[G, H]$. Thus indeed $[G, H] = \Altf (\ul G \cup \ul H)$.
Finally, since the action of $\Altf (\ul G \cup \ul H)$ on $\ul G \cup \ul H$ is $2$-transitive when $\ul G \cup \ul H$ has at least $4$ elements, its centraliser is trivial unless $\abs{G} = \abs{H} = 2$ (in which case $\Altf (\ul G \cup \ul H)$ is abelian).
\end{proof}
At this point we can completely describe the case of two finite groups.
\begin{thm}\label{thm:finite}
Let $G$ and $H$ be two non-trivial finite groups. Then
\begin{itemize}
\item $G \pv H = \Sym (\ul G \cup \ul H)$ if and only if $G$ or $H$ has a nontrivial \emph{cyclic} $2$-Sylow;
\item $G \pv H = \Alt (\ul G \cup \ul H)$ otherwise.
\end{itemize}
\end{thm}
In particular, $G \pv H$ surjects onto neither $G$ nor $H$ when the latter are finite groups of order at least $3$.
As was pointed out to us by P.~de la Harpe, the results of~\cite{Kang04} can be seen as pertaining to random walks on $G \pv H$ with $G$ and $H$ finite.
\begin{proof}[Proof of Theorem~\ref{thm:finite}]
We know from Proposition~\ref{prop:containsaltf} that $G \pv H$ contains $\Altf( \ul G \cup \ul H)$. Since the latter is a proper maximal subgroup of the symmetric group, everything amounts to understanding when $G$ and $H$ both belong to $\Alt(\ul G \cup \ul H)$, or contrariwise when a finite group contains some element whose associated translation is an odd permutation.
If $g \in G$ has order $k \in \NN$, then the translation by $g$ on $\ul {G}$ has $\abs{G} / k$ orbits of length $k$. Therefore its sign as a permutation of $\ul G$ is
\begin{equation*}
(-1)^{\frac{\abs{G}}{k}(k - 1)},
\end{equation*}
and this is also its sign as a permutation of $\ul G \cup \ul H$. The above exponent is odd precisely when $k$ is even but $\abs{G} / k$ is not, i.e.\ when $G$ contains an element of even order with the same $2$-valuation as $\abs{G}$. This happens if and only if $G$ has a nontrivial cyclic $2$-Sylow subgroup: a generator of such a subgroup is an element of this kind, and conversely the cyclic group generated by such an element contains a $2$-Sylow subgroup of $G$.
\end{proof}
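The dichotomy of Theorem~\ref{thm:finite} can be tested by brute force on very small examples. The sketch below is our own illustration under stated assumptions: only cyclic factors are modelled, and the encoding of the glued set $\ul G \cup \ul H$ is ours. It generates the permutation group induced by the two translation actions and compares its order with that of the symmetric and alternating groups.

```python
# Generate the finite model of Z/n ⋈ Z/m inside Sym(ul Z/n ∪ ul Z/m)
# and return (order of the generated group, number of points).
from math import factorial

def coupled_order(n, m):
    pts = [('e',)] + [('G', x) for x in range(1, n)] + [('H', y) for y in range(1, m)]
    idx = {p: i for i, p in enumerate(pts)}

    def point(side, x, mod):
        return ('e',) if x % mod == 0 else (side, x % mod)

    def translation(side, g, mod):
        images = []
        for p in pts:
            if p == ('e',):
                q = point(side, g, mod)
            elif p[0] == side:
                q = point(side, g + p[1], mod)
            else:
                q = p
            images.append(idx[q])
        return tuple(images)

    gens = [translation('G', g, n) for g in range(1, n)] + \
           [translation('H', h, m) for h in range(1, m)]
    group = {tuple(range(len(pts)))}
    frontier = list(group)
    while frontier:                       # closure under the generators
        new = []
        for p in frontier:
            for q in gens:
                r = tuple(q[i] for i in p)   # the composite q ∘ p
                if r not in group:
                    group.add(r)
                    new.append(r)
        frontier = new
    return len(group), len(pts)

# Z/2 has a nontrivial cyclic 2-Sylow: the group is Sym(3).
assert coupled_order(2, 2) == (factorial(3), 3)
# Z/3 has trivial 2-Sylow: the group is Alt(5).
assert coupled_order(3, 3) == (factorial(5) // 2, 5)
# Z/4 is its own cyclic 2-Sylow: the group is Sym(6).
assert coupled_order(4, 3) == (factorial(6), 6)
```

The brute-force closure is only feasible for a handful of points, but it matches both branches of the dichotomy.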
Since our definition of $G\pv H$ presents it as a permutation group of the set $\ul G \cup \ul H$, we record the following permutational consequence of Proposition~\ref{prop:containsaltf}.
\begin{cor}\label{cor:high}
Let $G$ and $H$ be non-trivial groups and suppose that at least one is infinite.
Then the action of $G\pv H$ on $\ul G \cup \ul H$ is highly transitive. Moreover, any faithful $2$-transitive $G\pv H$-set is isomorphic, as a $G\pv H$-set, to $\ul G \cup \ul H$.
\end{cor}
We recall here that an action is called \textbf{$n$-transitive} if the induced action on the set of $n$-tuples of distinct points is transitive, and \textbf{highly transitive} if it is $n$-transitive for every $n\in \NN$. The latter is equivalent to the density of the representation to the full permutation group endowed with the usual pointwise topology.
\begin{proof}[Proof of Corollary~\ref{cor:high}]
The high transitivity is due to the fact that $G\pv H$ contains $\Altf( \ul G \cup \ul H)$, which is already highly transitive.
The uniqueness up to isomorphism is a general fact for permutation groups containing $\Altf( \ul G \cup \ul H)$ as a normal subgroup with trivial centraliser (which was recorded in Proposition~\ref{prop:containsaltf}). This general fact is established in Proposition~2.4 of~\cite{LeBoudec-MatteBon_high_arx}.
\end{proof}
\section{The canonical epimorphism and the monolith}
In view of Theorem~\ref{thm:finite}, the main focus of this text is on infinite groups.
\begin{prop}\label{prop:epi}
Let $G$ and $H$ be any groups. If $G$ is infinite, then there is a canonical epimorphism $\pi_G\colon G\pv H \to G$ which is a left inverse for the inclusion $G\to G\pv H$.
More precisely, $\pi_G$ is given by a canonical identification $(G\pv H)/([G,H]H) \cong G$.
\end{prop}
This proposition can of course be applied to both factors when both are infinite. Then $\pi=(\pi_G, \pi_H)$ provides the canonical epimorphism discussed in the introduction, as follows.
\begin{cor}\label{cor:epi}
Let $G$ and $H$ be infinite groups. There is a canonical epimorphism $\pi\colon G \pv H \twoheadrightarrow G \times H$ such that $\pi(g)=(g, e)$ and $\pi(h) = (e,h)$ for all $g\in G$ and $h\in H$. This epimorphism is given by a canonical identification $(G\pv H)/[G, H] \cong G\times H$. \qed
\end{cor}
\begin{proof}[Proof of Proposition~\ref{prop:epi}]
Consider the support $\Supp(\sigma)$ in $\ul G \cup \ul H$ of an element $\sigma$ of the group $[G,H]H$. Since $H$ acts trivially on $\ul G\smallsetminus \{\ul e\}$ and since every element of $[G,H]$ is finitely supported by Proposition~\ref{prop:tricycle}, it follows that $\Supp(\sigma)\cap \ul G$ is finite.
Since on the other hand $\ul G$ is infinite, any two representatives of a coset $\tau [G,H]H$ (where $\tau\in G\pv H$) coincide on a cofinite subset of $\ul G$, and thus coincide there with the multiplication by an element $g\in G$. This defines a map $\tau\mapsto g$ which is a well-defined homomorphism and is the identity on $G<G\pv H$.
\end{proof}
In conclusion, if we combine Proposition~\ref{prop:containsaltf} with Corollary~\ref{cor:epi} and with the fact that $\Altf (\ul{G} \cup \ul{H})$ is a simple group, we obtain the extension~(E) of the introduction (the second part of Proposition~\ref{prop:E}) together with an additional specification:
\begin{prop}\label{prop:monolith}
If $G$ and $H$ are infinite, then the kernel of the canonical epimorphism $G \pv H \twoheadrightarrow G \times H$ is $\Altf (\ul{G} \cup \ul{H})$. Moreover, any nontrivial normal subgroup of $G \pv H$ contains this kernel.
\end{prop}
The latter statement means that $G\pv H$ is a \textbf{monolithic} group, with monolith $\Altf (\ul{G} \cup \ul{H})=[G,H]$. Thus any proper quotient of $G \pv H$ is a quotient of $G \times H$.
\begin{proof}[Proof of Proposition~\ref{prop:monolith}]
It only remains to prove that every non-trivial normal subgroup $N$ of $G\pv H$ contains $\Altf (\ul{G} \cup \ul{H})$. If not, the simplicity of $\Altf (\ul{G} \cup \ul{H})=[G,H]$ implies that it meets $N$ trivially, which entails that these two groups commute. As noted in Proposition~\ref{prop:containsaltf}, this implies that $N$ is trivial.
\end{proof}
\begin{rem}\label{rem:resfin}
The existence of this monolith shows in particular that ${G \pv H}$ is \emph{never} residually finite, and in particular never finitely generated linear ($G$ and $H$ infinite). This stands in contrast to both free and direct products, which preserve residual finiteness and linearity (in equal characteristic). This is clear for direct products; for free products, the former is a theorem of Gruenberg~\cite{Gruenberg57} and the latter of Nisnewitsch~\cite{Nisnewitsch}, see also~\cite{Wehrfritz_free}.
\end{rem}
In conclusion of this section, we have indeed placed $G \pv H$ in between the free and the direct product when both $G$ and $H$ are infinite:
$$G * H \lraa G\pv H\lraa G\times H.$$
Moreover, we can write $G\pv H$ as iterated semidirect products
$$G\pv H \cong ([G,H]\rtimes H) \rtimes G \cong ([G,H]\rtimes G) \rtimes H,$$
but we shall prove next that $G \pv H \twoheadrightarrow G \times H$ itself usually does not split.
\section{The extension usually does not split}
\begin{thm}\label{thm:end}
Let $G,H$ be one-ended finitely generated groups.
Then the canonical epimorphism $G\pv H \twoheadrightarrow G\times H$ does not split.
\end{thm}
\noindent
We recall that one-ended groups are infinite, so that the above projection is indeed defined in view of Corollary~\ref{cor:epi}.
The proof begins with an argument that we borrow from~\cite{Cornulier_near_arx}; after that, we let $G$ and $H$ compete for the insufficient space in $\ul G \cup \ul H$. As mentioned in the introduction, Theorem~\ref{thm:end} can alternatively be deduced from Theorem~7.C.1 in~\cite{Cornulier_near_arx}.
\begin{proof}[Proof of Theorem~\ref{thm:end}]
We suppose for a contradiction that there is a lifting $G\times H\to G\pv H$ and we denote by $\tilde g, \tilde h\in G\pv H$ the images of $g\in G$ and $h\in H$ under this lifting; in particular $\tilde g$ commutes with $\tilde h$.
Consider the Cayley graph of $G$ associated to some finite symmetric generating set $S\se G$. For definiteness, let us choose the unoriented simple left Cayley graph, that is, the edges are all sets $\{g, sg\}$ with $g\in G$, $s\in S$ and $s\neq e$. Let further $\Gamma$ be the graph obtained by deleting every edge $\{g, sg\}$ for which $\tilde s \ul g \neq s \ul g$ (recalling that the right hand side is simply $\ul{sg}$).
In view of the definition of the projection $G\pv H \to G\times H$, we see that for any given $s$, only finitely many edges $\{g, sg\}$ are removed. Since $S$ is finite, we have only removed finitely many edges in the definition of $\Gamma$. By definition of one-endedness, $\Gamma$ has a connected component with finite complement.
Consider the map $\chi\colon G \to \ul G \cup \ul H$ defined by $\chi(g) = \tilde g\inv \ul g$. Then $\chi$ is constant on the connected components of $\Gamma$. Therefore, there is $x\in \ul G \cup \ul H$ such that $\chi(g) = x$ holds outside a finite set of elements $g\in G$. In other words, the set
$$A = \left\{ g\in G : \tilde g x \neq \ul g \right\}$$
is finite. We now consider the $\wt G$-orbit of $x$ in $\ul G \cup \ul H$ and claim that this orbit is regular, i.e.\ with trivial stabilisers. Indeed, suppose $\tilde k x = x$ for some $k\in G$. Since $G$ is infinite, we can choose $g \notin A\cup A k\inv$. Then $\ul g = \tilde g x = \tilde g \tilde k x =\wt{g k} x = \ul{gk}$ and hence $k=e$, as claimed.
It follows that this orbit decomposes into disjoint sets as
$$\wt G x = \wt A x \sqcup (\ul G\smallsetminus \ul A).$$
In conclusion, $\wt A x$ contains exactly $|A|$ elements and lies in $\ul A \cup \ul H$.
We now apply the same arguments with $G$ and $H$ interchanged, providing $y\in \ul G \cup \ul H$ and a finite set $B\se H$ with all the corresponding statements.
We further record that since $G$ is finitely generated, there is a finite set $V\se H$ such that $\wt G$ acts trivially on $\ul H\smallsetminus \ul V$. Likewise, $\wt H$ acts trivially on $\ul G\smallsetminus \ul U$ for some finite set $U\se G$.
We claim that the orbits $\wt G x$ and $\wt H y$ are disjoint. Indeed, suppose for a contradiction that $z$ belongs to both. Since $G$ is infinite, we can choose $g\in G$ such that $\tilde g z$ is in $\ul G\smallsetminus (\ul U\cup \{\ul e\})$. Likewise, we can choose $h\in H$ with $\tilde h z$ in $\ul H\smallsetminus (\ul V\cup \{\ul e\})$. Then $\wt{hg}z = \tilde g z \in \ul G\smallsetminus \{\ul e\}$ but this element is also $\wt{gh}z = \tilde h z \in \ul H\smallsetminus \{\ul e\}$, a contradiction confirming the claim.
At this point it follows that $\wt A x$ lies in $\ul A \cup \ul B$, and so does $\wt B y$. The sets $\wt A x$ and $\wt B y$ are disjoint and contain $|A|$, respectively $|B|$, elements. This forces $\ul A$ and $\ul B$ to be disjoint as well, with $\ul A \cup \ul B$ coinciding with $\wt A x \sqcup \wt B y$.
Since $\wt G x$ and $\wt H y$ cannot both contain $\ul e$, we can assume $\ul e \notin \wt G x$, which implies $\ul e \neq \tilde e x$ and thus $e\in A$. Now on the one hand $\ul A \cup \ul B = \wt A x \cup \wt B y$ implies $\ul e \in \wt B y$. But on the other hand, $\ul A$ and $\ul B$ being disjoint forces $e\notin B$, which means $\ul e = \tilde e y = y$. Taken together, $\ul e \in \wt B \ul e$. Since the orbit $\wt H \ul e$ has trivial stabilisers, this shows $e\in B$, a contradiction.
\end{proof}
For infinite groups that are not one-ended, the statement of Theorem~\ref{thm:end} can fail. Since the number of ends is then either two or infinite, the next proposition illustrates both cases according to whether the rank below is $n=1$ or $n\geq 2$.
\begin{prop}
Let $G,H$ be infinite groups.
If one of them is a free group on $n\geq 1$ generators, then the canonical projection $G\pv H \to G\times H$ splits.
\end{prop}
\begin{proof}
Let $F_n$ be a free group on $n\geq 1$ generators. We shall start by defining an $F_n$-action on $F_n$, which we denote by $(g, q)\mapsto \tilde g q$ for $g,q\in F_n$. We further write $\fhi_g$ for the map $F_n\to F_n$ defined by $\fhi_g(q) = \tilde g( g\inv q)$. Since $F_n$ is free, we can specify the action by defining it on a set of free generators. Given a generator $g$, we choose any $g'\neq e,g$ in $F_n$. We define $\tilde g$ by $\tilde g e=e$, $\tilde g g\inv = g'$, $\tilde g (g\inv g') =g$ and $\tilde g q = gq$ in all other cases. Then the map $\fhi_g$ is the cycle $(g; e; g')$ and hence the permutation $\tilde g$ has the following two properties:
\begin{enumerate}
\item $\tilde g e = e$,
\item $\fhi_g$ is a finitely supported permutation of $F_n$ which is even.
\end{enumerate}
Now we observe that these two conditions hold in fact for all $g\in F_n$; this is clear for the first one. As to the second condition, it is inherited from the generators because of the relation
$$\fhi_{ab} = \fhi_a \circ a \circ \fhi_b \circ a\inv$$
which holds for all $a,b\in F_n$.
Turning to the setting of the proposition, suppose now that $G=F_n$ and consider the $G$-action on $\ul G \cup \ul H$ given by the above on $\ul G$ and trivial on $\ul H$; this is well-defined since $\ul e$ is fixed. The second condition shows that this gives a lift $\wt G$ of $G$ in $G\pv H$ because of the construction of the epimorphism $G\pv H\to G$ in the proof of Proposition~\ref{prop:epi}. Since this lift commutes with (the canonical image of) $H$, we have indeed a lift of $G\times H$.
\end{proof}
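For the smallest case $F_1 = \ZZ$ the construction can be carried out explicitly. The sketch below is our own illustration: the generator is $s = 1$ and the choice $s' = 2$ plays the role of $g'$ (any $g'\neq e,g$ would do). It checks that each $\fhi_g$ is a finitely supported even permutation and that the relation $\fhi_{ab} = \fhi_a \circ a \circ \fhi_b \circ a\inv$ holds pointwise.

```python
# Explicit lift for F_1 = Z: the generator acts by translation by 1,
# corrected on the three points e = 0, s^{-1} = -1 and s^{-1}s' = 1
# (with the chosen s' = 2), exactly as prescribed in the proof above.

def t(q):          # action of the generator s = 1 on Z
    return {0: 0, -1: 2, 1: 1}.get(q, q + 1)

def t_inv(q):      # its inverse
    return {0: 0, 2: -1, 1: 1}.get(q, q - 1)

def tilde(n, q):
    """Action of n in Z, i.e. the n-th power of the generator's action."""
    step = t if n >= 0 else t_inv
    for _ in range(abs(n)):
        q = step(q)
    return q

def phi(n, q):
    """phi_g(q) = tilde_g(g^{-1} q), written additively."""
    return tilde(n, q - n)

def sign_on(f, window):
    """Parity of a permutation whose support lies inside `window`."""
    supp, seen, sign = [q for q in window if f(q) != q], set(), 1
    for q in supp:
        length, x = 0, q
        while x not in seen:
            seen.add(x); x = f(x); length += 1
        if length:
            sign *= (-1) ** (length - 1)
    return sign

# phi of the generator is the 3-cycle (s; e; s') = (1; 0; 2) ...
assert [phi(1, q) for q in (1, 0, 2)] == [0, 2, 1]
# ... and every phi_n is a finitely supported even permutation
assert all(sign_on(lambda q: phi(n, q), range(-10, 15)) == 1 for n in range(-5, 6))
# the relation phi_{ab} = phi_a ∘ a ∘ phi_b ∘ a^{-1}, checked pointwise
assert all(phi(a + b, q) == phi(a, a + phi(b, q - a))
           for a in range(-4, 5) for b in range(-4, 5) for q in range(-12, 12))
```

The window `range(-10, 15)` is large enough to contain the support of $\fhi_n$ for $|n|\leq 5$; the relation itself is an exact identity, so any window works for the last check.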
By construction, each factor $G$ and $H$ in $G\pv H$ lies above the corresponding factor in $G\times H$. In other words, the lifting obstruction of Theorem~\ref{thm:end} really concerns the simultaneous lifting of both factors. Nonetheless, it can be strengthened to hold for the diagonal subgroup when $G=H$:
\begin{thm}\label{thm:end-diago}
Let $G$ be a one-ended finitely generated group.
Then the diagonal subgroup in $G\times G$ cannot be lifted to $G\pv G$.
\end{thm}
Although this statement is of course stronger than the particular case $H=G$ of Theorem~\ref{thm:end}, it will be sufficient to indicate the points where the proof differs from the latter. Again, the statement could also be deduced from results about non-realisability of near actions, specifically using Theorem~7.C.1 in~\cite{Cornulier_near_arx}.
\begin{proof}[Proof of Theorem~\ref{thm:end-diago}]
In order to minimise confusion, we consider two copies $G_1$, $G_2$ of $G$; for each $g\in G$ we write $g_1$ for the corresponding element $(g, e)$ of $G_1\times G_2$ and similarly $g_2=(e, g)$. We suppose for a contradiction that there is a homomorphism $g\mapsto \tilde g$ from $G$ to $G_1 \pv G_2$ such that the image of $\tilde g$ in $G_1\times G_2$ is the diagonal element $g_1 g_2$.
The proof uses the arguments given for Theorem~\ref{thm:end} with minor changes only. We begin with a left Cayley graph for $G$ with respect to a finite symmetric generating set $S\se G$ and delete every edge $\{g, sg\}$ for which either $\tilde s \ul {g_1} \neq \ul {s_1 g_1}$ or $\tilde s \ul {g_2} \neq \ul {s_2 g_2}$ (or both). We have two maps $\chi_i \colon G \to \ul{G_1} \cup \ul{G_2}$ defined by $\chi_i (g) = \tilde g\inv \ul {g_i}$. The one-end reasoning followed for Theorem~\ref{thm:end} shows that for $i=1,2$ there is $x_i\in \ul{G_1} \cup \ul {G_2}$ (with no indication in which copy of $\ul G$ it lies) such that the set $A_i\subseteq G$ defined by
$$A_i = \left\{ g\in G : \tilde g x_i \neq \ul{g_i}\right\}$$
is finite. We check as above that the $\wt G$-orbits of both $x_i$ are regular. This time the two orbits are disjoint simply because they are orbits of the same group $\wt G$ and cannot coincide since $\wt G x_i$ contains only finitely many points outside $\ul{G_i}$.
At this point we can deduce exactly as for Theorem~\ref{thm:end} that the sets $\wt{A_i} x_i$ are disjoint and lie in $\ul{A_1} \cup \ul {A_2}$, where we wrote simply $\ul{A_i}$ for $\ul{(A_i)_i}$. The end of the proof follows the same strategy, obtaining a contradiction based on the location of $\ul e$ in $\wt{A_1} x_1 \sqcup \wt{A_2} x_2 = \ul{A_1} \sqcup \ul {A_2}$.
\end{proof}
\section{Amenability}\label{sec:amen}
We first record the following basic stability result.
\begin{lem}\label{lem:amen}
If $G$ and $H$ are amenable groups, then so is $G\pv H$.
\end{lem}
\begin{proof}
If $G$ and $H$ are both infinite, then Proposition~\ref{prop:monolith} shows that $G\pv H$ is an extension of two amenable groups: $\Altf(\ul G \cup \ul H)$, which is amenable because it is locally finite, and $G\times H$, which is amenable because both $G$ and $H$ are so.
If $G$ and $H$ are both finite, then so is $G\pv H$ and hence the latter is amenable.
Finally, if exactly one of $G$ or $H$ is infinite, let us assume it is $G$. Then $[G,H]H$ is locally finite by Proposition~\ref{prop:containsaltf} and $G\pv H$ is an extension of $[G,H]H$ by $G$ by Proposition~\ref{prop:epi}.
\end{proof}
We now establish the more surprising statement of Theorem~\ref{thm:co-amen}.
\begin{thm}
Given any two groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item $G$ is a co-amenable subgroup of $G\pv H$.\label{pt:coamen:coamen}
\item Either $H$ is trivial or both $G$ and $H$ are amenable (hence also $G\pv H$).\label{pt:coamen:both}
\end{enumerate}
\end{thm}
\begin{proof}
\eqref{pt:coamen:coamen}$\Longrightarrow$\eqref{pt:coamen:both}. Suppose that $G$, viewed as a subgroup of $G\pv H$, is co-amenable. We can suppose that $H$ is non-trivial. We can also suppose that $G$ is infinite, since otherwise $G$ is amenable and hence so is $G\pv H$ by co-amenability. Write $A=\Altf(\ul G \cup \ul H)$, so that $A=[G,H]\lhd G\pv H$ by Proposition~\ref{prop:containsaltf}. Since we reduced to the case $G$ infinite, $AG=A\rtimes G$ is semidirect, while $AH$ might not be in case $H$ is finite. Likewise, $G\pv H \cong (AH)\rtimes G$ is semidirect.
We begin with the easy part: the amenability of $H$. The subgroup $A G$ is a fortiori co-amenable in $G\pv H$. Being normal, this means that the quotient group is amenable. If $H$ is infinite, this quotient is $H$ by Proposition~\ref{prop:epi}. If $H$ is finite, it is amenable anyway.
We now establish the amenability of $G$. Consider the map
$$S\colon A H \lra \paf(\ul G),\kern5mm S(\sigma) = \ul G \cap \Supp(\sigma),$$
where $\paf$ denotes the set of finite subsets and $\Supp$ the support of a permutation. For instance, $S(e)=\varnothing$ and $S(h)=\{\ul e\}$ if $h\in H$ is non-trivial. We endow $A H$ with the $G\pv H$-action resulting from viewing $A H$ as the coset space $(G\pv H)/G$. That is, $G$ acts by conjugation and $A H$ by the regular left multiplication. On the other hand, we only endow $\paf(\ul G)$ with its natural $G$-action. Then the map $S$ is $G$-equivariant and thus induces a $G$-equivariant map
$$S_* \colon \means(A H) \lra \means(\paf(\ul G))$$
on the space of means (finitely additive probability measures). Note that $G$ fixes a point in the left hand side, since it even fixes a point in the underlying set $A H$ (namely the identity). By co-amenability, it follows that there is a mean $\mu$ on $A H$ fixed by $G\pv H$. In particular, $S_*(\mu)$ is a $G$-invariant mean on $\paf(\ul G)$.
We claim that $S_*(\mu)\left(\left\{E\in \paf(\ul G): \ul g \in E\right\}\right)=1$ holds for every $g\in G$. Indeed, this number is by definition $\mu\left(\left\{\sigma\in A H : \sigma \ul g \neq \ul g\right\}\right)$. Therefore, the claim amounts to showing that $\mu$ assigns mass zero to the set $\left\{\sigma\in A H : \sigma \ul g = \ul g\right\}$. This set is the stabiliser of $\ul g$ in $A H$ for its action on $\ul G \cup \ul H$. Since this is a transitive action on an infinite set, this stabiliser has infinite index. Now the invariance of $\mu$ under $A H$ implies that the mass is indeed zero, because the infinitely many cosets of the stabiliser are pairwise disjoint and all have the same mass. This justifies the claim.
Our claim establishes that the $G$-action on $\ul G$ is extensively amenable, cf.\ Definition~1.1 in~\cite{JMMBS18}. In particular, this action is amenable by Lemma~2.1 in~\cite{JMMBS18}. This shows that $G$ is an amenable group.
\medskip
As to the implication \eqref{pt:coamen:both}$\Longrightarrow$\eqref{pt:coamen:coamen}, it is immediate since $G\pv H$ is itself amenable when both $G$ and $H$ are (Lemma~\ref{lem:amen}), and $G\pv 1 = G$.
\end{proof}
The next statement contains Proposition~\ref{prop:amen}.
\begin{prop}\label{prop:amen:high}
Given two infinite groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item At least one of $G$ or $H$ is amenable.\label{pt:amen:one}
\item $G\pv H$ admits a faithful highly transitive amenable action.\label{pt:amen:amen}
\item $G\pv H$ admits a faithful doubly transitive amenable action.\label{pt:2:amen}
\end{enumerate}
\end{prop}
\begin{proof}
\eqref{pt:amen:one}$\Longrightarrow$\eqref{pt:amen:amen}.
Suppose that $G$ is amenable and consider the action of $G\pv H$ on $\ul G \cup \ul H$. In view of Corollary~\ref{cor:high}, we only need to justify that it is amenable. Since $G$ is amenable, it admits a sequence $(A_n)$ of left F{\o}lner sets $A_n\se G$. Since $G$ is infinite, we can choose $g_n$ such that $A_n g_n$ does not contain $e$. Note that $(A_n g_n)$ is still a left F{\o}lner sequence in $G$; but now $(\ul{A_n g_n})$ is also a F{\o}lner sequence for the $G \pv H$-action on $\ul G \cup \ul H$ because $H$ fixes $\ul{A_n g_n}$.
\eqref{pt:amen:amen}$\Longrightarrow$\eqref{pt:2:amen} is trivial.
\eqref{pt:2:amen}$\Longrightarrow$\eqref{pt:amen:one}.
If $G\pv H$ admits a faithful doubly transitive amenable action, then we can assume by Corollary~\ref{cor:high} that it is the action on $\ul G \cup \ul H$. Let thus $\mu$ be an invariant mean on $\ul G \cup \ul H$. Then $\mu(\ul G)$ and $\mu(\ul H)$ cannot both vanish since $\mu(\ul G \cup \ul H)=1$. By symmetry we can assume $\mu(\ul G)>0$. After renormalising, we obtain a mean $\mu'$ on $\ul G$ which is still invariant under every element of $G\pv H$ which preserves $\ul G$. In particular, it is invariant under $G$ and witnesses that $G$ is an amenable group.
\end{proof}
We observe that the argument given for \eqref{pt:amen:one}$\Longrightarrow$\eqref{pt:amen:amen} also establishes the following.
\begin{prop}
Let $G$ and $H$ be any groups. If one of $G$ or $H$ is infinite amenable, then $G\pv H$ admits a faithful transitive amenable action.
Moreover, we can take this action to be highly transitive unless the other group is trivial.\qed
\end{prop}
Strengthening amenability to hereditary or extensive amenability flips again the behaviour of $G\pv H$ with respect to $G$ and $H$:
\begin{prop}
Given two infinite groups $G$ and $H$, the following are equivalent.
\begin{enumerate}[(i)]
\item Both $G$ and $H$ are amenable.\label{pt:extamen:both}
\item $G\pv H$ admits a faithful highly transitive extensively amenable action.\label{pt:extamen:ext}
\item $G\pv H$ admits a faithful highly transitive hereditarily amenable action.\label{pt:extamen:her}
\end{enumerate}
\noindent
Moreover, in~\eqref{pt:extamen:ext} and~\eqref{pt:extamen:her} we can replace high transitivity by double transitivity.
\end{prop}
\begin{proof}
\eqref{pt:extamen:both}$\Longrightarrow$\eqref{pt:extamen:ext}. In view of Corollary~\ref{cor:high}, it suffices to prove that the action of $G\pv H$ on $\ul G \cup \ul H$ is extensively amenable. By Lemma~\ref{lem:amen}, the group $G\pv H$ is amenable. It remains to recall that every action of an amenable group is extensively amenable, see Lemma~2.1 in~\cite{JMMBS18}.
\eqref{pt:extamen:ext}$\Longrightarrow$\eqref{pt:extamen:her} holds by Corollary~2.3 in~\cite{JMMBS18}.
\eqref{pt:extamen:her}$\Longrightarrow$\eqref{pt:extamen:both}. By Corollary~\ref{cor:high}, the action can be taken to be the canonical $G\pv H$-action on $\ul G \cup \ul H$. By definition of hereditary amenability, the $G$-action on every $G$-orbit in $\ul G \cup \ul H$ remains amenable. Applying this to the $G$-orbit $\ul G$, we deduce that $G$ is an amenable group. We argue likewise for $H$.
\medskip
Finally, the modifications needed for double transitivity in place of high transitivity are taken care of by Corollary~\ref{cor:high} as in the proof of Proposition~\ref{prop:amen:high}.
\end{proof}
\section{Property (FA) and the cubical complex}
\begin{thm}\label{thm:FA}
Let $G$ and $H$ be any groups. Then $G\pv H$ has Serre's property~(FA) if and only if both $G$ and $H$ do.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{thm:FA}]
We start with the main case where both $G$ and $H$ are infinite.
Suppose that $G$ and $H$ have property~(FA) and consider an action of $G\pv H$ by automorphisms on a tree $T$. We write $A=\Altf(\ul G \cup \ul H)$. Upon taking the barycentric subdivision, we can assume that $G\pv H$ acts without inversions; in particular, if the set $T^A$ of $A$-fixed points is non-empty, then it forms a subtree. In that case the quotient $G\times H$ acts on $T^A$ and hence we find a fixed point of $G\pv H$ since property~(FA) is closed under finite products by~\cite[\S3.3]{Serre74}.
We can therefore assume that $A$ has no fixed point in $T$. Since every finitely generated subgroup of $A$ is finite, a compactness argument shows that $A$ fixes a point at infinity $\xi\in \partial T$, see Ex.~2 in~\cite[I\S6.5]{Serre77}. We claim that $\xi$ is the unique such point. Indeed, if $\xi'\neq \xi$ is also fixed, then $A$ preserves the entire geodesic line in $T$ with endpoints $\xi, \xi'$. Being locally finite, $A$ must then fix a vertex on this geodesic, contradicting $T^A = \varnothing$, whence the claim.
Since $A$ is normal in $G\pv H$, the uniqueness claim implies that $G\pv H$ fixes $\xi$. Let now $x$ be a vertex fixed by $G$ and $y$ a vertex fixed by $H$. Then $G$ fixes the entire geodesic ray from $x$ to $\xi$ and
$H$ the geodesic ray from $y$ to $\xi$. Having the same point at infinity, these rays meet. Any point in their intersection is therefore fixed by both $G$ and $H$ and hence by $G\pv H$.
The converse is clear since $G$ and $H$ are quotients of $G\pv H$ by Corollary~\ref{cor:epi}.
\medskip
We consider now the case where exactly one group, say $G$, is infinite. This time, we write $A$ for the kernel of the canonical epimorphism $G\pv H\to G$ given by Proposition~\ref{prop:epi} and we can argue exactly as above. (For the converse direction, a priori only $G$ is a quotient of $G\pv H$, but $H$ has property~(FA) anyway since it is finite.)
Finally, the case where both $G$ and $H$ are finite is trivial since $G\pv H$ is finite as well.
\end{proof}
The construction of the \cat0 cubical complex consists in applying a classical argument to the action on $\ul G\cup \ul H$, as follows.
\begin{thm}\label{thm:ccc}
Let $G$ and $H$ be infinite groups. Then $G\pv H$ acts by automorphisms on a \cat0 cubical complex $V$ without bounded orbits.
Moreover, each of $G$ and $H$ has fixed vertices in $V$, and there is a unique choice of such vertices that are adjacent.
\end{thm}
In particular, it follows that $G\pv H$ does not have Kazhdan's property~(T), as stated in Theorem~\ref{thm:T}. Indeed, it is well known that an unbounded action on a \cat0 cubical complex gives rise to an unbounded action on a Hilbert space, that is, negates property~(FH), and hence in turn precludes property~(T). This fact has a long history; the most complete reference we know for it is~\cite{Cornulier_FW_arx}.
\begin{proof}[Proof of Theorem~\ref{thm:ccc}]
Consider $\ul G \cup \ul H$ with its canonical $G\pv H$-action. The group $G$ preserves the subset $\ul G$, while the group $H$ preserves $\ul G \smallsetminus\{\ul e\}$. Since $G$ and $H$ generate $G\pv H$, it follows that any $\sigma \in G\pv H$ \textbf{commensurates} the subset $\ul G$, which means by definition that the symmetric difference $\sigma \ul G \triangle \ul G$ is finite. This is a classical setting for the construction of ``walled spaces'' and \cat0 cubical complexes, as initiated notably in~\cite{Sageev95} and~\cite{Haglund-Paulin}; a very general and complete treatment is given in~\cite{Cornulier_FW_arx}. We recall the explicit construction:
Let $V$ be the collection of all subsets $v\se \ul G \cup \ul H$ for which $v\triangle \ul G$ is finite. Consider the (simple, unoriented) graph with vertex set $V$ defined by declaring that $v$ is adjacent to $v'$ whenever $v\triangle v'$ contains exactly one element. The point made in the above references is that this graph is the one-skeleton of a \cat0 cubical complex and the natural $G\pv H$-action on $V$ extends to an action by automorphisms of this complex. By construction, $G$ fixes $\ul G$ when viewed as a vertex, and $H$ fixes the vertex $\ul G \smallsetminus\{\ul e\}$, which is adjacent to $\ul G$.
More generally, a vertex $v\in V$ is fixed by $G$ if and only if $\ul G \se v$; likewise, $v'\in V$ is fixed by $H$ if and only if $v' \se \ul G \smallsetminus \{\ul e\}$. This shows that the pair $(\ul G , \ul G \smallsetminus \{\ul e\})$ is the unique adjacent choice.
Finally, the $G\pv H$-orbits are unbounded because the combinatorial distance $\left|\sigma \ul G \triangle \ul G\right|$ is unbounded, for instance by high transitivity (Corollary~\ref{cor:high}).
\end{proof}
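The commensuration can be watched concretely in a small finite model of the glued set. The sketch below is our own illustration under assumptions: the labels and the choice $G = H = \ZZ/5$ are ours. It checks that translations from $G$ preserve $\ul G$ exactly, while a non-trivial $h\in H$ moves it by a symmetric difference of size $2$, consistent with $H$ fixing the vertex $\ul G \smallsetminus\{\ul e\}$ adjacent to $\ul G$.

```python
# In a finite model of the glued set with G = H = Z/5, check that the
# symmetric difference sigma(ul G) Δ ul G is empty for sigma in G and
# equals {ul e, ul h} for a non-trivial h in H.
n = 5

def point(side, x):
    """Encode ul x, identifying the two neutral elements."""
    return ('e',) if x % n == 0 else (side, x % n)

def act(side, g, p):
    """Left translation by g in the factor `side`."""
    if p == ('e',):
        return point(side, g)
    s, x = p
    return point(side, g + x) if s == side else p

G_ul = {('e',)} | {('G', x) for x in range(1, n)}   # the subset ul G

for g in range(1, n):   # every g in G preserves ul G as a set
    assert {act('G', g, p) for p in G_ul} == G_ul
for h in range(1, n):   # every non-trivial h moves it by {ul e, ul h}
    image = {act('H', h, p) for p in G_ul}
    assert image ^ G_ul == {('e',), ('H', h)}
```

In the cubical complex, the combinatorial distance from $\ul G$ to $\sigma \ul G$ is exactly this symmetric difference count.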
The action on the above complex $V$ can be described further; the following shows in particular that the orbits of $G\pv H$ in $V$ coincide with the orbits of its normal subgroup $[G,H]$.
\begin{thm}
Consider $s\colon V \to \ZZ$ defined by $s(v) = \left|v\smallsetminus \ul G\right| - \left|\ul G \smallsetminus v\right|$.
\begin{enumerate}[(i)]
\item The map $s$ is a $G\pv H$-invariant surjection.\label{pt:ccc:inv}
\item The fibers $V_n=s\inv(\{n\})$, with $n\in \ZZ$, coincide with the orbits of $G\pv H$ as well as with the orbits of $[G,H]$ in $V$.\label{pt:ccc:orbits}
\item The orbit $V_n$ has a unique $G$-fixed point if $n=0$, infinitely many if $n>0$, and none if $n<0$.\label{pt:ccc:pos}
\item The orbit $V_n$ has a unique $H$-fixed point if $n=-1$, infinitely many if $n<-1$, and none if $n>-1$.\label{pt:ccc:neg}
\end{enumerate}
\end{thm}
\begin{proof}
\eqref{pt:ccc:inv} The map $s$ is onto by the definition of $V$. Note that $s$ is $G$-invariant because the $G$-action preserves both $\left|v\smallsetminus \ul G\right|$ and $\left|\ul G \smallsetminus v\right|$. Consider the following variant $s'$ of $s$:
$$s'(v) = \left|v\smallsetminus (\ul G \smallsetminus \{\ul e\}) \right| - \left|(\ul G \smallsetminus \{\ul e\}) \smallsetminus v\right|.$$
For the same reason as above, $s'$ is $H$-invariant. Therefore, the $G\pv H$-invariance of $s$ will follow if we prove $s'=s+1$. This, however, follows readily by distinguishing the cases where $\ul e$ belongs to $v$ or does not. In the former case, $\left|v\smallsetminus (\ul G \smallsetminus \{\ul e\}) \right| = \left|v\smallsetminus \ul G\right| +1$ and $\left|(\ul G \smallsetminus \{\ul e\}) \smallsetminus v\right| = \left|\ul G \smallsetminus v\right|$. In the latter, $\left|v\smallsetminus (\ul G \smallsetminus \{\ul e\}) \right| = \left|v\smallsetminus \ul G\right|$ and $\left|(\ul G \smallsetminus \{\ul e\}) \smallsetminus v\right| = \left|\ul G \smallsetminus v\right|-1$.
\medskip
\noindent
\eqref{pt:ccc:orbits} Because of~\eqref{pt:ccc:inv}, it suffices to prove that $[G, H]$ acts transitively on $s\inv(\{n\})$ for any $n\in \ZZ$. We shall use the fact that $[G, H]$ coincides with $\Altf(\ul G \cup \ul H)$ by Proposition~\ref{prop:containsaltf}.
Consider first $n\geq 0$. Choose some $n$-element set $A_n\se \ul H \smallsetminus\{\ul e\}$ and let $v_n =\ul G \cup A_n$, noting $s(v_n) = n$. Consider now any element $v'\in V_n$; we seek $\sigma\in \Altf(\ul G \cup \ul H)$ with $\sigma v' = v_n$. Define the finite sets $B=v'\smallsetminus \ul G$ and $C=\ul G \smallsetminus v'$. Thus $|B| - |C| = n$ and we can choose a subset $B'\se B$ with $|B'|=|C|$, whence also $|B\smallsetminus B'|=n$. The high transitivity of $\Altf(C \cup \ul H)$ on $C \cup \ul H$ implies that there is $\sigma$ in $\Altf(C \cup \ul H)$ satisfying $\sigma(B') = C$ and $\sigma(B\smallsetminus B')=A_n$. We now consider $\sigma$ as an element of $\Altf(\ul G \cup \ul H)$ fixing $\ul G \smallsetminus C$; then indeed $\sigma v' = v_n$ as required.
The case $n<0$ is similar. We choose a $|n|$-element set $A_n\se \ul G$ and let $v_n =\ul G \smallsetminus A_n\in V_n$. Given any $v'\in V_n$, define again $B=v'\smallsetminus \ul G$ and $C=\ul G \smallsetminus v'$. This time there is $C'\se C$ with $|C'|=|B|$ and $|C\smallsetminus C'|=|n|$ and we find $\sigma$ with $\sigma(C')=B$ and $\sigma(C\smallsetminus C')=A_n$.
\medskip
\noindent
Now that we know that the $G\pv H$-orbits are exactly the sets $V_n$, the points~\eqref{pt:ccc:pos} and~\eqref{pt:ccc:neg} follow from the observation made in the proof of Theorem~\ref{thm:ccc} that $v\in V$ is fixed by $G$ if and only if $\ul G \se v$, and fixed by $H$ if and only if $v \se \ul G \smallsetminus \{\ul e\}$.
\end{proof}
\begin{rem}
One important difference between the complex $V$ for $G\pv H$ and the Bass--Serre tree of $G*H$ is that vertex stabilisers in $G\pv H$ are much larger than conjugates of $G$ or $H$. For instance, the stabiliser of the vertex $\ul G$ maps onto $G\times H$ under the canonical projection.
Indeed, on the one hand this stabiliser contains $G$. On the other hand, given any non-trivial $h\in H$ we construct a lift $\tilde h\in G\pv H$ as follows. Choose any $h'\neq h, e$ in $H$ and consider the cycle $\sigma = \tricycle{h}{e}{h'}$, which is in $G\pv H$ by Proposition~\ref{prop:containsaltf}. Then the element $\tilde h=\sigma h $ of $G\pv H$ fixes the vertex $\ul G$ (it even fixes every element of $\ul G$).
\end{rem}
\section{Naturality}\label{sec:nat}
The construction of $G\pv H$ cannot be functorial in the usual sense with respect to the factors $G, H$. Specifically, given group homomorphisms $\alpha\colon G\to G'$ and $\beta\colon H\to H'$, we cannot expect to obtain a homomorphism $G\pv H \to G' \pv H'$ compatible with $\alpha$ and $\beta$ under the canonical inclusions. Indeed, an obstruction arises from the \emph{kernel} of $\alpha$ or $\beta$. This is obvious in the case of finite groups since $G\pv H$ is then typically simple by Theorem~\ref{thm:finite}. Similar examples with infinite groups can readily be given using the fact that $G\pv H$ is monolithic, and a combinatorial obstruction is proposed in Example~\ref{ex:obs} below.
On the other hand, as soon as we exclude kernels, the construction of $G\pv H$ retains as much functoriality as possible. In other words, given subgroups $K<G$ and $L<H$, there is a clear and natural relation between $K\pv L$ and $G\pv H$. This is particularly transparent in the case of infinite groups, where we shall show the following.
\begin{prop}\label{prop:nat-inf}
Consider two groups $G$, $H$ and subgroups $K<G$, $L<H$.
If $K$ and $L$ are infinite, then the canonical embeddings of $K$ and $L$ into $G\pv H$ extend to an embedding $K\pv L \to G\pv H$.
\end{prop}
For finite groups, there is still an additional complication and the above statement does not hold (Example~\ref{exam:nat} below). The naturality can then be expressed in the following weaker form:
\begin{lem}\label{lem:nat-epi}
Consider two groups $G$, $H$ and subgroups $K<G$, $L<H$. Let $\langle K, L \rangle < G\pv H$ be the subgroup generated by the canonical images of $K$ and $L$ in $G\pv H$.
Then there is an epimorphism $\langle K, L \rangle \twoheadrightarrow K\pv L$ compatible with the embeddings of $K$ and $L$.
\end{lem}
In other words, the canonical epimorphism $K*L\twoheadrightarrow K\pv L$ factors through $\langle K, L \rangle$.
\begin{proof}[Proof of Lemma~\ref{lem:nat-epi}]
Consider the subset $\ul K \cup \ul L$ of $\ul G \cup \ul H$. This set is invariant under $K$ and $L$ and hence under $\langle K, L \rangle$. Therefore, we obtain by restriction an action of $\langle K, L \rangle$ on $\ul K \cup \ul L$ which coincides with the action defining $K\pv L$.
\end{proof}
The fact that this restriction epimorphism is in general not injective explains why it cannot be inverted to give an embedding as in Proposition~\ref{prop:nat-inf} in the case of finite groups:
\begin{exam}\label{exam:nat}
Consider non-trivial finite groups $G$, $H$ and a subgroup $K<G$ with $2<|K| <|G|$. Define $L=H$.
As in the proof of Lemma~\ref{lem:nat-epi}, $\langle K, L \rangle$ preserves the subset $\ul K \cup \ul L$ of $\ul G \cup \ul H$. Its complement is therefore also invariant under $\langle K, L \rangle$. This complement is $\ul G \smallsetminus \ul K$, which consists of a non-zero number of cosets of $K$ in $G$. The group $L$ acts trivially on this set, while the $K$-action on each coset is regular. Therefore, we obtain an epimorphism $\langle K, L \rangle\twoheadrightarrow K$.
On the other hand, there cannot be any epimorphism $K \pv L\twoheadrightarrow K$ since otherwise Theorem~\ref{thm:finite} would force $K$ to have order at most two.
(A similar obstruction occurs even with $L$ infinite, if the latter has no quotient isomorphic to $K$. Indeed, since $[K,L]$ is the monolith of $K \pv L$, Proposition~\ref{prop:epi} implies that the only proper quotients of $K \pv L$ are the quotients of $L$ and, possibly, a cyclic group of order two.)
\end{exam}
Now in order to prove Proposition~\ref{prop:nat-inf}, it suffices to apply twice the following asymmetric version (where $H$ is allowed to be finite).
\begin{prop}\label{prop:nat-inf-bis}
Consider two groups $G$, $H$ and a subgroup $K<G$.
If $K$ is infinite, then the canonical embeddings of $K$ and $H$ into $G\pv H$ extend to an embedding $K\pv H \to G\pv H$.
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:nat-inf-bis}]
If we retain the notation of Lemma~\ref{lem:nat-epi} and its proof (with $L=H$), what we have to show is that the action of $\langle K, H \rangle$ on the subset $\ul K \cup \ul H$ of $\ul G \cup \ul H$ is faithful. Let thus $\sigma\in \langle K, H \rangle$ be an element in the kernel of this action. Consider the morphism $\pi_G\colon G\pv H \to G$ of Proposition~\ref{prop:epi}, which is defined since $G$ is infinite. The construction of $\pi_G$ given in the proof of Proposition~\ref{prop:epi} shows that $\sigma$ acts on all but finitely many points of $\ul G$ as the multiplication by $\pi_G(\sigma)$. Since $\ul K$ is infinite, it follows from the choice of $\sigma$ that $\pi_G(\sigma)$ is trivial.
What we have to show is that $\sigma$ acts trivially on $\ul G \smallsetminus \ul K$. Since $H$ fixes this set and $K$ preserves it, the action of $\langle K, H \rangle$ there is given by $\pi_G$; this concludes the proof.
\end{proof}
Now that we know that $G\pv H$ behaves well towards embeddings of infinite groups, it makes sense to consider its compatibility with directed unions of groups --- which are nothing but the concrete realisation of inductive limits with injective structure maps.
Recall thus that $\sK$ is a \textbf{directed} family of subgroups $K<G$ of a group $G$ if any two elements of $\sK$ are contained in a further element of $\sK$.
Given Proposition~\ref{prop:nat-inf}, the following statement follows from the fact that $G\pv H$ is generated by $G$ and $H$.
\begin{prop}\label{prop:union}
Suppose that $G$ is the union of a directed family $\sK$ of infinite subgroups and likewise $H$ of a family $\sL$.
Then the embeddings of Proposition~\ref{prop:nat-inf} realise $G\pv H$ as the union of the directed family of subgroups $K\pv L$ with $K\in \sK$, $L\in \sL$.\qed
\end{prop}
\begin{exam}\label{exam:GG_0}
Let $G_0$ be an infinite group and define the sequence $G_n$ by $G_{n+1} = G_n \pv G_0$. We thus obtain a group $G=\lim_n G_n$ satisfying $G \cong G\pv G_0$.
\end{exam}
There is an asymmetric version of Proposition~\ref{prop:union} where $H$ may be finite.
\begin{prop}\label{prop:union:bis}
Suppose that $G$ is the union of a directed family $\sK$ of infinite subgroups and let $H$ be any group.
Then the embeddings of Proposition~\ref{prop:nat-inf-bis} realise $G\pv H$ as the union of the directed family of subgroups $K\pv H$ with $K\in \sK$.\qed
\end{prop}
As mentioned in the introduction, the operation~$\pv$, while it satisfies an obvious commutativity isomorphism, is not associative. We first explain this in the case of infinite groups since this is where $G\pv H$ still resembles a product.
\begin{prop}
Let $G$, $H$ and $J$ be infinite groups. There is no homomorphism
$$f\colon G \pv \left(H \pv J\right)\lra \left(G \pv H\right) \pv J$$
compatible with the canonical inclusions of $G$, $H$ and $J$ into each of the two sides.
\end{prop}
\begin{proof}
In the right hand side, the normal closure $N_\mathrm{right}(J)$ of $J$ meets $(G \pv H)$ trivially by Proposition~\ref{prop:epi}. Therefore, the images of $G$ and $H$ in the right hand side generate $(G \pv H)$ also modulo $N_\mathrm{right}(J)$. In particular, non-trivial elements of $G$ and $H$ do not commute in that quotient, see Proposition~\ref{prop:tricycle}. By contrast, in the left hand side, the normal closure $N_\mathrm{left}(J)$ of $J$ contains the monolith $\left[G, (H \pv J)\right]$ which contains $[G,H]$; hence $G$ and $H$ commute in the respective quotient. This shows that $f$ cannot exist.
\end{proof}
For finite groups, the non-associativity can even be detected at the level of the order of the groups:
\begin{exam}
Let $G$, $H$ and $J$ be non-trivial finite groups of order $p$, $q$ and $r$ respectively. In order to avoid distinguishing cases in Theorem~\ref{thm:finite}, assume that the $2$-Sylow subgroups are not non-trivial cyclic (for instance, assume $p,q,r$ odd). Then $G\pv H = \Alt(p+q-1)$ still satisfies this property, because its $2$-Sylow subgroups have solubility length~$\geq 2$ (see Theorem~3 in~\cite{Dmitruk-Sushanskii}). It follows that $(G\pv H)\pv J$ is the alternating group on $\frac12 (p+q-1)! + r-1$ elements, and likewise $G\pv (H\pv J)$ on $\frac12 (r+q-1)! + p-1$ elements. The former number is strictly greater than the latter if and only if $p>r$, which shows that the two products have different orders as soon as $G$ and $J$ do.
\end{exam}
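The final comparison reduces to checking that $\frac12 (p+q-1)! + r-1 > \frac12 (r+q-1)! + p-1$ holds exactly when $p>r$: the factorial term dominates the linear correction. A quick numerical check (helper name hypothetical) over small odd values:

```python
from math import factorial
from itertools import product

def n_left(p, q, r):
    # number of points (G ⋆ H) ⋆ J acts on: (p+q-1)!/2 + r - 1;
    # swapping p and r gives the other bracketing G ⋆ (H ⋆ J)
    return factorial(p + q - 1) // 2 + r - 1

# The factorial term dominates the linear correction, so the comparison
# only sees the sign of p - r.
for p, q, r in product(range(3, 10, 2), repeat=3):
    assert (n_left(p, q, r) > n_left(r, q, p)) == (p > r)
print("ok for all odd p, q, r in [3, 9]")
```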
Finally, we return to the lack of naturality with respect to non-injective morphisms with a simple combinatorial observation.
\begin{exam}\label{ex:obs}
Let $G$ and $H$ be any groups. Suppose that $g, g'\in G$ are distinct non-trivial elements and likewise $h, h'\in H$. Recall from Proposition~\ref{prop:tricycle} that the commutator $[g,h]$ in $G\pv H$ is the cycle $\tricycle{e}{g}{h}$. It follows that the element $\sigma= [g,h] [g', h']$ is the cycle $(\ul e; \ul g'; \ul h'; \ul g; \ul h)$ which has order~$5$.
If on the other hand $g=g'\neq e$ (but $h, h'$ are still distinct and non-trivial), then $\sigma$ has order~$2$ because one checks that it is the product of the transpositions $(\ul e; \ul h)$ and $(\ul g; \ul h')$.
If finally $g=g'\neq e$ and $h=h'\neq e$ then Proposition~\ref{prop:tricycle} implies that $\sigma$ has order~$3$.
Since $5$, $3$ and $2$ are coprime, we see that no pair of homomorphisms defined on $G$ and $H$ can be extended to a map on $G\pv H$ that respects these combinatorics unless it is injective (or trivial) on $\{g, g'\}$ and on $\{h, h'\}$.
\end{exam}
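The three orders in Example~\ref{ex:obs} can be confirmed by direct computation. The sketch below (all names hypothetical) builds the defining action for $G = H = \mathbf{Z}/5$ and checks the order of $\sigma$ in each case:

```python
# Points of ul G ∪ ul H for G = H = Z/5: "e" is the shared neutral element,
# ("G", k) and ("H", k) with 1 <= k <= 4 are the remaining points of each copy.
def pt(side, k):
    return "e" if k % 5 == 0 else (side, k % 5)

X = {pt(s, k) for s in ("G", "H") for k in range(5)}

def act(side, a):
    # permutation induced by the element a of the factor `side`:
    # regular translation on its own copy, identity elsewhere
    perm = {x: x for x in X}
    for k in range(5):
        perm[pt(side, k)] = pt(side, (k + a) % 5)
    return perm

def compose(p, q):   # apply q first, then p
    return {x: p[q[x]] for x in X}

def inv(p):
    return {v: k for k, v in p.items()}

def comm(p, q):      # [p, q] = p q p^{-1} q^{-1}
    return compose(compose(p, q), compose(inv(p), inv(q)))

def order(p):
    n, q = 1, p
    while any(q[x] != x for x in X):
        q, n = compose(p, q), n + 1
    return n

g, gp = act("G", 1), act("G", 2)   # g, g' distinct non-trivial
h, hp = act("H", 1), act("H", 2)   # h, h' distinct non-trivial

assert order(compose(comm(g, h), comm(gp, hp))) == 5  # all four distinct
assert order(compose(comm(g, h), comm(g, hp))) == 2   # g = g'
assert order(compose(comm(g, h), comm(g, h))) == 3    # g = g' and h = h'
print("orders 5, 2, 3 confirmed")
```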
\section{Approximations by finite groups}\label{sec:LEF}
We turn our attention to the class of groups locally embeddable into finite groups (LEF) introduced in~\cite{Vershik-Gordon}. We first justify the corollary to Theorem~\ref{thm:LEF}, namely that $G \pv H$ is never finitely presented when $G$ and $H$ are residually finite and infinite.
\begin{proof}[Proof of Corollary~\ref{cor:rf}]
By Theorem~\ref{thm:LEF}, $G \pv H$ is LEF. If it is also finitely presented, then it must be residually finite (Theorem p.~58 in~\cite{Vershik-Gordon}). This is impossible when $G$ and $H$ are infinite, see Remark~\ref{rem:resfin}.
\end{proof}
We now establish the main result of this section.
\begin{proof}[Proof of Theorem~\ref{thm:LEF}]
The main case is when both $G$ and $H$ are infinite countable; we begin with this situation. We endow $G$ and $H$ with proper lengths $\ell_G$ and $\ell_H$ respectively. We recall that this means $\ell_G(g)$ is the distance $d(g,e)$ for some proper left-invariant distance $d$ on $G$, while properness means that all balls are finite. This exists for every countable group; for instance, $\ell_G(g)$ can be taken to be a suitably weighted word length. We write $B_G (n)$ and $B_H (n)$ for the corresponding closed balls of radius $n \in \NN$. Define $C_n \se \ul G \cup \ul H$ to be the set $\ul{B_G (n)} \cup \ul{B_H (n)}$. Let $F_n \se G \pv H$ be the set $B_G (n) B_H (n) \Alt(C_n)$, i.e.
\begin{equation*}
F_n = \left\{gha \ \middle| \ \ell_G (g) \leq n,\ \ell_H (h) \leq n,\ \Supp(a) \subseteq C_n\right\}.
\end{equation*}
Observe that $G \pv H = \bigcup_n F_n$ since $G \pv H = G \ltimes ( H \ltimes \Altf(\ul G \cup \ul H))$. Moreover, for any $\sigma \in F_n$, the elements $g \in G, h \in H$ and $a \in \Alt(C_n)$ such that $\sigma = gha$ are uniquely determined.
We record for later use that, if $\sigma_i = g_i h_i a_i \in F_n$ with $g_i \in B_G (n)$, $h_i \in B_H (n)$ and $a_i \in \Alt(C_n)$ ($i = 1, 2$), then
\begin{align}\label{eq:sof}
\sigma_1 \sigma_2 &= g_1 h_1 a_1 g_2 h_2 a_2 \notag \\
&= g_1 h_1 (g_2 h_2) (g_2 h_2)\inv a_1 g_2 h_2 a_2 \notag \\
&= (g_1 g_2) h_1 [h_1\inv, g_2\inv] h_2 (g_2 h_2)\inv a_1 g_2 h_2 a_2 \notag \\
&= (g_1 g_2) (h_1 h_2) \left\{ \left(h_2\inv [h_1\inv, g_2\inv] h_2 \right) \left( (g_2 h_2)\inv a_1 (g_2 h_2) \right) a_2 \right\}.
\end{align}
On the one hand, $g_1 g_2 \in B_G (2n)$ and $h_1 h_2\in B_H (2n)$. On the other hand, $[h_1\inv, g_2\inv] \in \Alt(C_n)$ by Proposition~\ref{prop:tricycle} and moreover the conjugate of $\Alt(C_n)$ by any element of $B_G (n)$ or of $B_H (n)$ remains in $\Alt(C_{2n})$. Therefore, the calculation~\eqref{eq:sof} shows in particular that $F_n F_n \subseteq F_{2n}$. Our goal is to build, for each $n$, an injective map from $F_{2n}$ into a finite group that is multiplicative on $F_n$.
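The rearrangement~\eqref{eq:sof} is a formal identity valid in any group, with the convention $[a,b] = a b a^{-1} b^{-1}$. As a sanity check (names hypothetical), one can test it on random permutations:

```python
import random

def compose(p, q):          # (p ∘ q)(x) = p(q(x))
    return [p[q[x]] for x in range(len(p))]

def inv(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return out

def comm(a, b):             # [a, b] = a b a^{-1} b^{-1}
    return compose(compose(a, b), compose(inv(a), inv(b)))

rng = random.Random(0)
n = 8
for _ in range(100):
    g1, h1, a1, g2, h2, a2 = (rng.sample(range(n), n) for _ in range(6))
    # left hand side: g1 h1 a1 g2 h2 a2
    lhs = compose(g1, compose(h1, compose(a1, compose(g2, compose(h2, a2)))))
    # right hand side: (g1 g2)(h1 h2){(h2^-1 [h1^-1, g2^-1] h2)((g2 h2)^-1 a1 (g2 h2)) a2}
    g2h2 = compose(g2, h2)
    inner = compose(
        compose(inv(h2), compose(comm(inv(h1), inv(g2)), h2)),
        compose(compose(inv(g2h2), compose(a1, g2h2)), a2),
    )
    rhs = compose(compose(g1, g2), compose(compose(h1, h2), inner))
    assert lhs == rhs
print("identity (eq:sof) verified on 100 random sextuples")
```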
Let $G_n$ and $H_n$ be respective finite quotients of $G$ and $H$ such that the associated epimorphisms $\pi_{G_n}\colon G \rightarrow G_n$ and $\pi_{H_n}\colon H \rightarrow H_n$ are injective on $B_G (4n)$ and on $B_H (4n)$. Let $\pi\colon \ul G \cup \ul H \rightarrow \ul{G_n} \cup \ul{H_n}$ be the (set-theoretic) surjection defined by $\pi(\ul g) = \ul{\pi_{G_n} (g)}$ and $\pi(\ul h) = \ul{\pi_{H_n} (h)}$ for $g \in G$ and $h \in H$. Observe that, by construction, the map $\pi$ enjoys a weak form of equivariance in the following sense:
\begin{equation}\label{eq:sofequiv}
\begin{aligned}
\pi (g z) &= \pi_{G_n} (g) \pi(z) && \text{for any } g \in G \text{ and } z \in \ul G \cup \left(\ul H \smallsetminus \ul{\ker \pi_{H_n}} \right), \\
\pi (h z) &= \pi_{H_n} (h) \pi(z) && \text{for any } h \in H \text{ and } z \in \left( \ul G \smallsetminus \ul{\ker \pi_{G_n}} \right) \cup \ul H.
\end{aligned}
\end{equation}
Indeed, if $z \in \ul G$, then $gz \in \ul G$ and $\pi(gz) = \pi_{G_n} (gz) = \pi_{G_n} (g) \pi_{G_n} (z) = \pi_{G_n} (g) \pi (z)$. If on the other hand $z \in \ul H \smallsetminus \ul{\ker \pi_{H_n}}$, then $gz = z$ and $\pi(z) = \pi_{H_n} (z) \in \ul{H_n} \smallsetminus \{ \ul e \}$, hence $\pi(gz) = \pi(z) = \pi_{G_n} (g) \pi(z)$. Similar computations hold for $H$.
We write $C'_k = \pi(C_k)$, observing that, by our choices of $G_n$ and $H_n$, the map $\pi$ induces a bijection between $C_k$ and $C'_k$ for any $k \leq 4n$.
Let finally $\Phi_n\colon F_{2n} \rightarrow G_n \pv H_n$ be defined by $\Phi_n (gha) = \Phi_n(g) \Phi_n (h) \Phi_n (a)$, where
\begin{align*}
\Phi_n (g) &= \pi_{G_n} (g) && \text{for } g \in G, \\
\Phi_n (h) &= \pi_{H_n} (h) && \text{for } h \in H, \\
\Phi_n (a)(x) &= \begin{cases} \pi (a (y)) & \text{if } x \in C'_{2n}, \text{where } y \in C_{2n}, \pi(y) = x \\ x & \text{if } x \not\in C'_{2n} \end{cases} && \text{for } a \in \Alt(C_{2n}).
\end{align*}
Observe that $\Phi_n (a)$ is indeed well-defined since $a$ preserves $C_{2n}$ and $\pi$ induces a bijection between $C_{2n}$ and $C'_{2n}$. Moreover, the relation
\begin{equation}\label{eq:sofphialt}
\Phi_n (a) (\pi(y)) = \pi (a (y))
\end{equation}
holds more generally for any $y \in C_{4n}$. Indeed, if $y \in C_{4n} \smallsetminus C_{2n}$, then $\pi(y) \in C'_{4n} \smallsetminus C'_{2n}$, hence $\Phi_n (a)$ fixes $\pi(y)$. But $a$ also fixes $y$, since $\Supp (a) \subseteq C_{2n}$.
\bigskip
Let us first prove that $\Phi_n$ is injective. Let $gha$ and $g'h'a'$ in $F_{2n}$ be such that $\Phi_n (gha) = \Phi_n (g'h'a')$. Applying the equality
\begin{equation*}
\Phi_n (g) \Phi_n (h) \Phi_n (a) x = \Phi_n (g') \Phi_n (h') \Phi_n (a') x \qquad \qquad (x \in \ul{G_n} \cup \ul {H_n})
\end{equation*}
first to some $x \in \ul {G_n} \smallsetminus C'_{2n}$ then to some $x \in \ul {H_n} \smallsetminus C'_{2n}$ yields successively
\begin{equation*}
\Phi_n (g) = \Phi_n (g') \qquad \text{and} \qquad \Phi_n (h) = \Phi_n (h'),
\end{equation*}
since $\Phi_n (a)$ and $\Phi_n (a')$ have support in $C'_{2n}$ and since $G_n$ (resp.~$H_n$) acts regularly on $\ul {G_n}$ (resp.~on $\ul {H_n}$) and trivially on $\ul {H_n} \smallsetminus \{\ul e \}$ (resp.~on $\ul {G_n} \smallsetminus \{\ul e \}$). As $\pi_{G_n}$ and $\pi_{H_n}$ are injective on $B_G (2n)$ and on $B_H (2n)$, respectively, we conclude that $g = g'$ and $h = h'$. Lastly, the equality
\begin{equation*}
\Phi_n (a) x = \Phi_n (a') x \qquad \qquad (x \in \ul{G_n} \cup \ul {H_n})
\end{equation*}
becomes
\begin{equation*}
\pi(a(y)) = \pi(a'(y))
\end{equation*}
for any $y \in C_{2n}$, hence $a = a'$ since $\pi$ is injective on $C_{2n}$, which is the support of both $a$ and $a'$. Hence $\Phi_n$ is indeed injective.
Our goal is now to prove that $\Phi_n$ is multiplicative on $F_n$, i.e.~that if $\sigma_1, \sigma_2 \in F_n$, then $\Phi_n (\sigma_1 \sigma_2) = \Phi_n (\sigma_1) \Phi_n (\sigma_2)$. Obviously, $\Phi_n$ is multiplicative on $B_G (n)$, on $B_H (n)$ and on $\Alt (C_n)$. Hence, in view of Eq.~\eqref{eq:sof}, we are left to prove the following equality:
\begin{multline}\label{eq:sofmult}
\Phi_n \left(h_2\inv [h_1\inv, g_2\inv] h_2 \right) \Phi_n \left( (g_2 h_2)\inv a_1 (g_2 h_2) \right) = \\ \left(\Phi_n(h_2)\inv [\Phi_n (h_1)\inv, \Phi_n (g_2)\inv] \Phi_n (h_2) \right) \left( (\Phi_n (g_2 ) \Phi_n (h_2))\inv \Phi_n (a_1) (\Phi_n (g_2) \Phi_n(h_2) ) \right)
\end{multline}
We begin with the first term of each product. The permutation $h_2\inv [h_1\inv, g_2\inv] h_2$ is the $3$-cycle $\tricycle{h_2\inv}{h_2\inv h_1\inv}{g_2\inv}$. Each point of this cycle belongs to $C_{2n}$, hence
\begin{equation*}
\Phi_n (h_2\inv [h_1\inv, g_2\inv] h_2) = \tricycle{\pi_{H_n} (h_2\inv)}{\pi_{H_n} (h_2\inv h_1\inv)}{\pi_{G_n}(g_2\inv)}.
\end{equation*}
On the other hand,
\begin{align*}
\Phi_n(h_2)\inv [\Phi_n (h_1)\inv, \Phi_n (g_2)\inv] \Phi_n (h_2) &= \pi_{H_n}(h_2)\inv [\pi_{H_n} (h_1)\inv, \pi_{G_n} (g_2)\inv] \pi_{H_n} (h_2) \\
&= (\ul{\pi_{H_n} (h_2\inv)} ; \pi_{H_n} (h_2\inv) \ul{\pi_{H_n}(h_1\inv)} ; \ul{\pi_{G_n}(g_2\inv)}).
\end{align*}
But thanks to the equivariance~\eqref{eq:sofequiv}, we have
\begin{equation*}
\pi_{H_n} (h_2\inv) \ul{\pi_{H_n}(h_1\inv)} = \pi_{H_n} (h_2\inv) \pi (h_1\inv) = \pi (h_2\inv h_1\inv) = \ul{\pi_{H_n} (h_2\inv h_1 \inv)},
\end{equation*}
hence
\begin{equation*}
\Phi_n \left(h_2\inv [h_1\inv, g_2\inv] h_2 \right) = \Phi_n(h_2)\inv [\Phi_n (h_1)\inv, \Phi_n (g_2)\inv] \Phi_n (h_2),
\end{equation*}
as needed.
We now turn our attention to the second terms of each side of \eqref{eq:sofmult}. Let $x \in C'_{2n}$ and $y \in C_{2n}$ such that $\pi(y) = x$.
Since, on the one hand, $C_{4n} \se \left( \ul G \smallsetminus \ul{\ker \pi_{G_n}} \right) \cup \left(\ul H \smallsetminus \ul{\ker \pi_{H_n}} \right) \cup \{ \ul e\}$ and, on the other hand, $g_2 h_2 C_{2n} \se C_{3n}$ and $(g_2 h_2)\inv C_{3n} \se C_{4n}$ we can use the equivariance relations~\eqref{eq:sofequiv} and the fact that $a_1$ preserves $C_{3n}$ (or indeed any superset of $C_n$) in order to compute:
\begin{align*}
\Phi_n \left( (g_2 h_2)\inv a_1 (g_2 h_2) \right)(x) &= \pi \left( \left\{ (g_2 h_2)\inv a_1 (g_2 h_2) \right\}(y) \right) \\
&= \pi_{H_n} (h_2)\inv \pi_{G_n}(g_2)\inv \pi(a_1 (g_2 h_2 y)). \\
\intertext{As $g_2 h_2 y \in C_{3n}$, we can use the relation~\eqref{eq:sofphialt} and, once again, the equivariance~\eqref{eq:sofequiv}, to conclude the computations:}
\dots &= \pi_{H_n} (h_2)\inv \pi_{G_n}(g_2)\inv \Phi_n(a_1) (\pi_{G_n} (g_2) \pi_{H_n} (h_2) x) \\
&= \left( (\Phi_n (g_2 ) \Phi_n (h_2))\inv \Phi_n (a_1) (\Phi_n (g_2) \Phi_n(h_2) ) \right) (x).
\end{align*}
If now $x \not\in C'_{2n}$, then $\Phi_n (g_2) \Phi_n (h_2) x \not\in C'_n$. Either this point belongs to $C'_{2n} \smallsetminus C'_n$ or it does not. In both cases it is fixed by $\Phi_n (a_1)$ since the support of $a_1$ is included in $C_n$. Hence
\begin{align*}
\left( (\Phi_n (g_2 ) \Phi_n (h_2))\inv \Phi_n (a_1) (\Phi_n (g_2) \Phi_n(h_2) ) \right) (x) &= (\Phi_n (g_2 ) \Phi_n (h_2))\inv (\Phi_n (g_2) \Phi_n(h_2) ) (x) \\
&= x \\
&= \Phi_n \left( (g_2 h_2)\inv a_1 (g_2 h_2) \right)(x)
\end{align*}
(the last line being due to $x \not\in C'_{2n}$).
To sum up, we have checked that
\begin{equation*}
(\Phi_n (g_2 ) \Phi_n (h_2))\inv \Phi_n (a_1) (\Phi_n (g_2) \Phi_n(h_2)) = \Phi_n \left( (g_2 h_2)\inv a_1 (g_2 h_2) \right)
\end{equation*}
and hence that the equality~\eqref{eq:sofmult} holds, namely that $\Phi_n$ is indeed multiplicative on $F_n$. This concludes the proof that $G \pv H$ is LEF when the residually finite groups $G, H$ are both infinite countable.
\medskip
We now extend the statement to the case where $G, H$ are both infinite. The family $\sK$ of all infinite countable subgroups $K<G$ is directed and covers $G$. The corresponding fact holds for the family $\sL$ of infinite countable subgroups $L<H$. Proposition~\ref{prop:union} therefore realises $G\pv H$ as the directed union of all $K\pv L$, which are LEF by the first case. On the other hand, the LEF property passes to directed unions by its very definition.
\medskip
It remains to consider the case where at least one of $G$ or $H$ is finite. If both are finite, then so is $G\pv H$ and LEF holds trivially. We can therefore assume that $G$ is infinite and $H$ finite. As before, we cover $G$ by all its infinite countable subgroups and apply now Proposition~\ref{prop:union:bis}; this reduces us to the case where $G$ is infinite countable and $H$ finite. Now the argument is a simpler variant of the argument given for the main case above, and becomes very close to the argument given for Lemma~3.2 in~\cite{Elek-Szabo06}. Specifically, the above computations still hold with the following minor modifications:
\begin{itemize}
\item $C_n$ is now the set $\ul {B_G (n)} \cup \ul H$.
\item $F_n$ is now $B_G (n) \Alt (C_n)$, or $B_G (n) \Sym(C_n)$ if $H$ has a nontrivial cyclic $2$-Sylow. This distinction is made to ensure that $\bigcup_n F_n = G \pv H$, since the latter is now $G \ltimes ([G, H]H)$ and $[G,H]H$ is $\Altf(\ul G \cup \ul H)$ or $\Symf(\ul G \cup \ul H)$ depending on whether $H$ admits a nontrivial cyclic $2$-Sylow (compare with Theorem~\ref{thm:finite}).
\item $G_n$ is still chosen so that the projection map $\pi_{G_n}$ is injective on $B_G (4n)$ but $H_n$ is simply $H$.
\item The surjection $\pi$ is defined as above (where $\pi_{H_n}$ is thus the identity) and $C'_k$ is again $\pi(C_k)$.
\item The map $\Phi_n$ is defined similarly on $B_G (2n)$ and on $\Alt (C_{2n})$ (or $\Sym (C_{2n})$).
\item The relation that needs to be proved in order to establish that $\Phi_n$ is multiplicative on $F_n$ is now:
\begin{equation*}
\Phi_n (g_1 g_2) \Phi_n (g_2\inv a_1 g_2 a_1) = \Phi_n (g_1) \Phi_n (g_2) \left( \Phi_n (g_2)\inv \Phi_n (a_1) \Phi_n (g_2) \Phi_n (a_1)\right).
\end{equation*}
Since $\Phi_n$ is again multiplicative on $B_G (n)$ and on $\Alt (C_n)$ (or $\Sym(C_n)$), this equation reduces to
\begin{equation*}
\Phi_n (g_2\inv a_1 g_2) = \left( \Phi_n (g_2)\inv \Phi_n (a_1) \Phi_n (g_2)\right).
\end{equation*}
The latter can be proved, as for the second terms of~\eqref{eq:sofmult}, by applying each side to some $x \in \ul {G_n} \cup \ul {H_n}$ and distinguishing between the cases $x \in C'_{2n}$ and $x \not\in C'_{2n}$.
\end{itemize}
\end{proof}
\section{Topologies, completions and generalisations}\label{sec:gen}
Endow the group $\Sym(\ul G \cup \ul H)$ with the topology of pointwise convergence, which is Polish if $G$ and $H$ are countable. Since the group $G \pv H$ acts highly transitively on $\ul G \cup \ul H$ (unless both $G$ and $H$ are finite), it is a dense subgroup of $\Sym(\ul G \cup \ul H)$. An equivalent reformulation is to consider the topology of pointwise convergence on $G \pv H$; then the statement is that the topological group $G \pv H$ admits $\Sym(\ul G \cup \ul H)$ as its (upper) completion.
We shall introduce two other completions $K_G$ and $K_H$ of $G \pv H$, which are (usually non-discrete) Polish groups when $G$ and $H$ are countable, but with the additional property that the diagonal embedding of $G \pv H$ into $K_G \times K_H$ is \emph{discrete}.
Thus, in a very loose sense, the situation is analogous to that of an arithmetic group (instead of $G \pv H$) with its diagonal embedding into the product of its local completions at all places (instead of $K_G$ and $K_H$).
\medskip
To set up the notation, let $X$ be any set. We recall that a \emph{bornology} on $X$ is a family $\Born \subseteq \Power (X)$ that is closed under subsets and finite unions. We define $\Sym_\Born (X)$ as the subgroup of $\Sym(X)$ that preserves the bornology $\Born$, i.e. $g \in \Sym_\Born (X)$ if
\begin{equation*}
B \in \Born \quad \Longleftrightarrow \quad g(B) \in \Born.
\end{equation*}
We endow $\Sym_\Born (X)$ with the \emph{topology of uniform discrete convergence on $\Born$}, an identity neighbourhood basis of which is given by the family of pointwise fixators $\Fix(B)$ for $B \in \Born$. For instance, if $\Born$ is the family of all finite sets, then $\Sym_\Born (X) = \Sym (X)$ and we recover the usual topology of pointwise convergence.
It can easily be checked by hand (and it follows from the more general Theorem~2.2 in~\cite{Gheysens_ULB_arx}) that the group $\Sym_\Born (X)$ is complete for the upper uniform structure. In particular, the group $\Sym_\Born (X)$ is \emph{completely metrisable} if $\Born$ is \emph{countably generated}, that is if there exists a countable subfamily $\Born' \subseteq \Born$ such that any $B \in \Born$ is a subset of some $B' \in \Born'$.
\medskip
Coming back to $G \pv H$, we observe that this group preserves two natural bornologies, namely the family $\Born_G$ of all subsets $B \se \ul G \cup \ul H$ such that $B \cap \ul G$ is finite, and the analogous family $\Born_H$ of all subsets $B$ such that $B \cap \ul H$ is finite. (There is a third invariant bornology, $\Born_G \cap \Born_H$, which is nothing more than the family of finite subsets.) Let $K_G$ (resp.~$K_H$) be the closure of $G \pv H$ in the topological group $\Sym_{\Born_G} (\ul G \cup \ul H)$ (resp.~$\Sym_{\Born_H} (\ul G \cup \ul H)$). By the above discussion, $K_G$ and $K_H$ are Polish if $G$ and $H$ are countable.
\begin{prop}
The group $K_G$ is non-discrete if (and only if) $G$ is infinite.
\end{prop}
\begin{proof}
If $G$ is finite, then $\Born_G = \Power (\ul G \cup \ul H)$ and hence $\Sym_{\Born_G} (\ul G \cup \ul H)$ is the whole group $\Sym (\ul G \cup \ul H)$ endowed with the discrete topology.
For the other direction, it suffices to exhibit a permutation $\varphi \in K_G$ not in $G \pv H$. Since $G$ is infinite, we can partition it into two infinite subsets $G_2 \sqcup G_3$ and then choose further partitions of $G_2$ into infinitely many pairs and of $G_3$ into infinitely many triples. Let $\varphi$ be the permutation that acts as a transposition on each pair of $\ul{G_2}$, as a $3$-cycle on each triple of $\ul{G_3}$ and as the identity on $\ul H \smallsetminus \{ \ul e \}$. Then, for any $B \in \Born_G$, there is an element of $\Altf (\ul G \cup \ul H)$ that agrees with $\varphi$ on $B$, hence $\varphi$ belongs to $K_G$. But $\varphi$ cannot be an element of $G \pv H$, because there is no cofinite subset of $\ul G$ on which $\varphi$ coincides with the multiplication by an element of $G$, since there are orbits of different sizes.
\end{proof}
\begin{prop}
If $G$ and $H$ are infinite, then the diagonal embedding $G \pv H \rightarrow K_G \times K_H$ has a discrete image.
\end{prop}
\begin{proof}
Indeed, $\ul H \in \Born_G$ and $\ul G \in \Born_H$, hence $\Fix \ul H \times \Fix \ul G$ is an identity neighbourhood of $K_G \times K_H$. This neighbourhood meets the diagonal only in the element $(e, e)$, so the image of $G \pv H$ is discrete.
\end{proof}
\medskip
Finally, we indicate two natural generalisations of the construction $G\pv H$.
\medskip
A first observation is that we can extend the definition to any family $\{G_i\}_{i\in I}$ of groups $G_i$ indexed by an arbitrary set $I$. To this end, we consider again for each $i\in I$ a copy $\ul{G_i}$ of the set underlying $G_i$, take the disjoint union of all these sets and then identify all the neutral elements $\ul e\in\ul{G_i}$. We consider each $G_i$ as a permutation group of the set $X=\bigcup_{i\in I} \ul{G_i}$ by giving it the regular action on $\ul{G_i}$ and the trivial action on the complement $X\smallsetminus \ul{G_i}$. Finally, we define $\bigpv_{i\in I} G_i$ to be the permutation group of $X$ generated by these copies of each $G_i$.
Thus $G_1\pv G_2$ is the special case $\bigpv_{i\in\{1,2\}} G_i$ but we should take care to distinguish $\bigpv_{i\in\{1,2,3\}} G_i$ from the groups $G_1\pv (G_2\pv G_3)$ and $(G_1\pv G_2) \pv G_3$ considered in Section~\ref{sec:nat}. A good number of the results presented in this text admit a straightforward adaptation to this setting.
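For small cyclic factors the construction can be carried out by machine. The sketch below (function name and layout illustrative, not from the paper) generates $\bigpv_i \mathbf{Z}/n_i$ as a permutation group of the wedge of the underlying sets and recovers the order $60$ of $\Alt(5)$ for $\mathbf{Z}/3 \pv \mathbf{Z}/3$, in line with Theorem~\ref{thm:finite}:

```python
def star_product(*cyclic_orders):
    """Closure computation for ⋆_i Z/n_i acting on the wedge of the
    underlying sets; point 0 plays the role of the shared element ul e."""
    offsets, total = [], 1
    for n in cyclic_orders:
        offsets.append(total)
        total += n - 1
    gens = []
    for n, off in zip(cyclic_orders, offsets):
        pts = [0] + list(range(off, off + n - 1))   # this factor's copy
        perm = list(range(total))
        for i, p in enumerate(pts):
            perm[p] = pts[(i + 1) % n]              # translation by a generator
        gens.append(tuple(perm))
    # generate the permutation group by breadth-first closure
    group = {tuple(range(total))}
    frontier = list(group)
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = tuple(p[g[x]] for x in range(total))
                if q not in group:
                    group.add(q)
                    new.append(q)
        frontier = new
    return group

# Z/3 ⋆ Z/3 acts on 3 + 3 - 1 = 5 points: order 60, i.e. Alt(5).
assert len(star_product(3, 3)) == 60
# Z/2 ⋆ Z/3 acts on 4 points and already gives the full Sym(4).
assert len(star_product(2, 3)) == 24
print("orders:", len(star_product(3, 3)), len(star_product(2, 3)))
```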
\medskip
A second generalisation is based on the observation that $G\pv H$ is not merely a group, but comes with a canonical incarnation as a permutation group (on the set $\ul G \cup \ul H$). Moreover, there is a natural choice of a point in this set, namely $\ul e$.
At this juncture we recall that a group $G$ can be considered as a special case of a pointed $G$-set in the following more precise sense. The functor associating to a group $G$ the pointed $G$-set $(\ul G, \ul e)$ is a fully faithful embedding of the category of groups into the category of pointed sets with a group action; this would not be the case for unpointed sets.
To extend the definition of $G\pv H$, we can start more generally with a pointed $G$-set $(X, *)$ and a pointed $H$-set $(Y, *)$. We then form the disjoint union of $X$ and $Y$ but with both copies of $*$ identified and still denote the resulting pointed space as $(X\cup Y, *)$. We consider now the group of permutations of $X\cup Y$ generated by the given images of $G$ and $H$. Some of the results presented in this text extend to this setting, and more generally we can form the pointed set with group action associated to an arbitrary family of groups $G_i$ given each with its pointed $G_i$-set $(X_i, *)$.
\medskip
We note that all these generalisations also support natural topologies. The obvious one is the topology of pointwise convergence inherited from the ambient symmetric group, but as before the various ``factors'' define bornologies $\Born$ for which we can consider the topology of uniform discrete convergence on $\Born$.
\section{Questions}\label{sec:q}
\begin{question}\label{q:GG_0}
Is there a \emph{finitely generated} group $G$ satisfying $G \cong G\pv G_0$ for a non-trivial group $G_0$?
\end{question}
The corresponding question has a non-trivial positive answer in the case of direct products, see Proposition~1 in~\cite{Tyrer-Jones}, and remains open for finitely presented groups~\cite{Hirshon02}. It has a negative answer for free products because of Grushko's theorem~\cite{Grushko}.
If no finite generation is assumed, we have seen in Example~\ref{exam:GG_0} that a straightforward limit argument provides $G\cong G\pv G_0$.
\begin{question}
Does $G\pv H$ have Haagerup's property when both $G$ and $H$ do?
\end{question}
If the answer is positive, then we would expect it to be established with the method of \emph{spaces with measured walls}, compare~\cite{Cherix-Martin-Valette,Cornulier-Stalder-Valette12,Robertson-Steger98}. In that case, we would also expect the corresponding positive answer for the property of admitting a proper action on a \cat0 cube complex (Property~PW in~\cite{Cornulier-Stalder-Valette08}).
\begin{question}\label{q:fp}
Can $G\pv H$ ever be finitely presented when $G$ and $H$ are infinite?
\end{question}
Since $G \pv H$ is monolithic, finite presentability is equivalent to being isolated in the space of marked groups, see Proposition~2 in~\cite{Cornulier-Guyot-Pitsch2007}. As established in Corollary~\ref{cor:rf}, $G\pv H$ is not finitely presented when $G$ and $H$ are infinite and residually finite.
\medskip
Section~\ref{sec:amen} explores several aspects of amenability but we have no complete answer to the following.
\begin{prob}
Characterise the pairs $G$, $H$ such that $G\pv H$ admits a faithful transitive amenable action.
\end{prob}
Recall that a satisfactory characterisation in the case of free products is obtained in~\cite{Glasner-Monod}, while the case of direct products is elementary.
\medskip
In view of Theorem~\ref{thm:LEF}, it is natural to ask:
\begin{question}
Is $G \pv H$ sofic whenever $G$ and $H$ are so?
\end{question}
Recall that direct and free products indeed preserve soficity (Theorem~1 in~\cite{Elek-Szabo06}).
\bibliographystyle{plain}
\bibliography{biblio_prod}
\end{document}
TITLE: Can someone explain how the square root of two negative numbers multiplied together doesn't equal a positive/negative number.
QUESTION [2 upvotes]: On a test I got a question asking to simplify $\sqrt{-25}\times \sqrt{-3}$
I answered $5\sqrt 3$ but apparently it's only $-5\sqrt 3$ because of some rule of imaginary numbers. Could someone please explain to me why $\sqrt{-25}\times \sqrt{-3} \ne \pm 5\sqrt 3$?
REPLY [3 votes]: You presumably tried to use the "rule" $\sqrt{a} \cdot \sqrt{b} = \sqrt{ab}$. However, this only holds when $a$ and $b$ are nonnegative! It is not applicable here.
Instead, we need to compute this product directly. $\sqrt{-25} = 5i$ and $\sqrt{-3} = i\sqrt{3}$, so $$\sqrt{-25} \sqrt{-3} = (5i)(i \sqrt{3}) = -5\sqrt{3}.$$
TITLE: Finite fields, trace map and the induce map is being a permutation of $ \mathbb F_{2^n}$
QUESTION [1 upvotes]: $n\ge 2$ is an integer
$$tr : \mathbb F_{2^n} \to \mathbb F_{2} \\ tr(x)=x+x^2+x^{2^2}+\cdots + x^{2^{n-1}}$$ is the absolute trace map
For fixed constants $\alpha, \theta \in \mathbb F_{2^n} $ define the following map
$$\phi : \mathbb F_{2^n} \to\mathbb F_{2^n}\\ \phi(y)= y+\alpha tr(\theta y)$$
My aim is to show that $\phi$ is a permutation (i.e.\ a bijection of the set) if and only if $tr(\alpha \theta)=0$:
1.) $tr(\alpha \theta)=0$ iff the map is a permutation of $\mathbb F_{2^n}$
2.) Also $\phi \circ \phi = Identity$ iff the map is a permutation of $\mathbb F_{2^n}$ (the right-to-left implication is trivial, but I am having a hard time with the other direction)
For 1.) I tried to manipulate the equation by substituting the fixed values $\alpha, \theta \in \mathbb F_{2^n}$
and obtained $$\phi(\alpha)=\alpha+\alpha\, tr(\alpha \theta)$$
I also tried many variable substitutions involving inverses. The key fact, I think, is that the trace function takes only the two values $0, 1$, but I could not finish the proof.
The same manipulations were also tried for 2.), but here, for the converse direction, writing out $\phi(\phi(x))$ for an arbitrary $x$ gives $$\phi(\phi(x))=x+\alpha\, tr(x \theta)+\alpha\, tr(x \theta)+\alpha\, tr\left(\theta\, \alpha\, tr(x \theta)\right)$$
Since the trace function takes only the two values $0, 1$, I think I can conclude that the last line equals $x$, which would prove the statement.
Any help, proof, hint would be highly appreciated.
REPLY [2 votes]: A nonzero $y$ lies in $\ker \phi$ iff $y = \alpha$ and $tr(\theta \alpha)=1$.
If $tr(\theta \alpha)=0$ then
$\phi(\phi(y))= y+\alpha\, tr(y\theta)+\alpha\, tr\big((y+\alpha\, tr(y\theta))\theta\big) = y+\alpha\, tr(\alpha\, tr(y\theta)\, \theta)= y$
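For anyone who wants to experiment, here is a quick brute-force check (my own addition, not part of the answer above) in the small field $GF(2^4)$, built with the irreducible polynomial $x^4+x+1$; field elements are represented as 4-bit integers:

```python
# Brute-force verification in GF(2^4): phi(y) = y + alpha * tr(theta * y)
# is a bijection exactly when tr(alpha * theta) = 0.
IRRED = 0b10011   # x^4 + x + 1
N = 4

def mul(a, b):
    """Carry-less multiplication modulo the irreducible polynomial."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
    return res

def tr(x):
    """Absolute trace tr(x) = x + x^2 + x^4 + x^8, with value in {0, 1}."""
    s, y = 0, x
    for _ in range(N):
        s ^= y
        y = mul(y, y)   # square
    return s

def is_permutation(alpha, theta):
    phi = [y ^ mul(alpha, tr(mul(theta, y))) for y in range(2 ** N)]
    return len(set(phi)) == 2 ** N

# Exhaustive check over all pairs (alpha, theta):
for alpha in range(2 ** N):
    for theta in range(2 ** N):
        assert is_permutation(alpha, theta) == (tr(mul(alpha, theta)) == 0)
```

The exhaustive loop confirms the claimed equivalence for every pair $(\alpha, \theta)$ in this field.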
TITLE: Calculate the Rate of Change of the Volume of a Frustum of a Right Circular Cone
QUESTION [2 upvotes]: I have a calculus question which I'm having trouble with. I've calculated the volume of the frustum with the formula $V=\frac{\pi h}{3}(R^2+Rr+r^2).$
However, I'm having difficulty differentiating this equation in order to find the rate of change of the volume. I would appreciate advice; here is the problem.
"A reservoir containing water has the shape of a frustum of a right circular cone of altitude 10 feet, a lower base of radius 10 feet, and an upper base radius 15 feet. How fast is the volume of the water increasing when the water is 6 feet deep and rising at a rate of 2 ft./hr?"
REPLY [0 votes]: The numbers in the figure are not consistent with the numbers in the setup.
As I read it, the tank itself is $10$ ft tall, with lower-base radius $10$ and upper-base radius $15$.
Since the radius grows by $5$ ft over the $10$ ft of height, extending the theoretical cone downward to its apex adds $20$ ft, so the full cone would be $30$ ft tall.
What we really care about is the radius at the water level. If we say $h$ is the depth of the water, and $r$ is the radius of the tank at this height, we can say
$r = 10 + \frac 12 h$
That way when the tank is full, $h = 10$ and $r = 15.$
When $h = 6$, $r = 13.$
$V = \frac 13\pi (R^2 + Rr + r^2) h$
$\frac {dV}{dh} = \frac 13\pi (R^2 + Rr + r^2) + \frac 13\pi (R^2 + Rr + r^2)'\,h$
We use the product rule because $r$ is a function of $h$:
$\frac {dV}{dh} = \frac 13\pi (R^2 + Rr + r^2) + \frac 13\pi \left(R\frac {dr}{dh} + 2r \frac {dr}{dh}\right)h$
We have $h, R, r$ above, and $\frac {dr}{dh} = \frac 12$ follows from $r = 10 + \frac 12 h$; that is everything needed to find $\frac {dV}{dh}$.
The chain rule then gives $\frac {dV}{dt} = \frac {dV}{dh}\frac {dh}{dt}.$
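A quick numerical cross-check (a sketch I am adding, not part of the original answer; it assumes the surface radius is $r(h) = 10 + \frac12 h$, growing from $10$ ft at the bottom to $15$ ft at the top):

```python
import math

R = 10.0                       # fixed lower-base radius (ft)

def r(h):
    # assumed water-surface radius at depth h (10 ft at bottom, 15 ft at top)
    return 10.0 + 0.5 * h

def V(h):
    # volume of the water frustum of depth h: lower radius R, upper radius r(h)
    return math.pi * h / 3.0 * (R**2 + R * r(h) + r(h) ** 2)

h, dh_dt = 6.0, 2.0            # depth 6 ft, rising at 2 ft/hr

# analytic dV/dh via the product rule worked out above
dr_dh = 0.5
dV_dh = (math.pi / 3.0) * (R**2 + R * r(h) + r(h) ** 2) \
      + (math.pi / 3.0) * (R * dr_dh + 2 * r(h) * dr_dh) * h

# cross-check the analytic derivative with a central finite difference
eps = 1e-6
numeric = (V(h + eps) - V(h - eps)) / (2 * eps)
assert abs(dV_dh - numeric) < 1e-4

dV_dt = dV_dh * dh_dt          # chain rule; equals 338*pi ft^3/hr here
```

Under this model $dV/dh = 169\pi$ at $h = 6$, so the water volume increases at $338\pi \approx 1061.9$ cubic feet per hour.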
TITLE: Quadratic form defined by a permutation of projection matrix
QUESTION [0 upvotes]: Let $M$ be a projection matrix. We know that
$$
v'Mv=(Mv)'(Mv)=\sum^n_{i=1}\sum^n_{j=1}v_iM_{ij}v_j\leq \sum^n_{i=1}v_i^2
$$
where $v$ is a column vector with dimension compatible with $M$.
Can we conclcude the same thing with following?
$$
v'M'PMv=(Mv)'(PMv)=\sum^n_{i=1}\sum^n_{j=1}v_i[\sum_{k}M_{ki}(PM)_{kj}]v_j\leq \sum^n_{i=1}v_i^2
$$
where $P$ is a permutation matrix.
If the answer is no, can we put additional restrictions on $P$ to make it true? For example, if
$$
P=
\begin{pmatrix}
0 & 1 & 0 & 0 &\cdots\\
1 & 0 & 0 & 0&\vdots\\
\vdots & \vdots & \ddots& \ddots &\vdots\\
0 & 0 & \cdots & 0&1\\
0 & 0 & \cdots & 1&0
\end{pmatrix}
$$
REPLY [4 votes]: The result you're looking for holds as a consequence of the Cauchy–Schwarz inequality. I will use $\|x\|$ to refer to the norm $\|x\| = \sqrt{x'x}$. We note that $\|Px\| = \|x\|$ for all $x$. Thus, we can apply the CS inequality to get
$$
v'M'PMv = (Mv)'(PMv) \leq \|Mv\| \cdot \|PMv\| = \|Mv\|^2 \leq \|v\|^2.
$$
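As a quick sanity check (my own sketch, not part of the answer above), the bound can be verified numerically for random orthogonal projections and permutations:

```python
import numpy as np

# Numerically verify v'M'PMv <= v'v for orthogonal projections M
# and permutation matrices P.
rng = np.random.default_rng(0)

for _ in range(100):
    n, k = 6, 3
    A = rng.standard_normal((n, k))
    # orthogonal projection onto the column space of A
    M = A @ np.linalg.solve(A.T @ A, A.T)
    P = np.eye(n)[rng.permutation(n)]      # random permutation matrix
    v = rng.standard_normal(n)
    lhs = v @ M.T @ P @ M @ v
    assert lhs <= v @ v + 1e-9             # Cauchy-Schwarz bound holds
```

Note that the argument (and hence the check) uses $M$ symmetric and idempotent, which is the usual meaning of "projection matrix" here.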
\begin{document}
\title{Model-Based Iterative Reconstruction \\ for Radial Fast Spin-Echo MRI}
\author{Kai~Tobias~Block,~Martin~Uecker,~and~Jens~Frahm}
\date{}
\maketitle
\begin{abstract}
In radial fast spin-echo MRI, a set of overlapping spokes with an
inconsistent T2 weighting is acquired, which results in an averaged
image contrast when employing conventional image reconstruction
techniques. This work demonstrates that the problem may be overcome
with the use of a dedicated reconstruction method that further allows
for T2 quantification by extracting the embedded relaxation
information. Thus, the proposed reconstruction method directly yields
a spin-density and relaxivity map from only a single radial data set.
The method is based on an inverse formulation of the problem and
involves a modeling of the received MRI signal. Because the solution
is found by numerical optimization, the approach exploits all data
acquired. Further, it handles multi-coil data and optionally allows
for the incorporation of additional prior knowledge. Simulations and
experimental results for a phantom and human brain in vivo demonstrate
that the method yields spin-density and relaxivity maps that are
neither affected by the typical artifacts from TE mixing, nor by
streaking artifacts from the incomplete k-space coverage at individual
echo times.
\smallskip
\noindent \textbf{Keywords.}
turbo spin-echo, radial sampling, non-Cartesian MRI,
iterative reconstruction, inverse problems.
\end{abstract}
\thispagestyle{empty}
\vfill
{\centering
\fbox{\parbox{\textwidth}{
\noindent
{\small
Copyright 2009 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or future
media, including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.}
\vspace{0.2cm}
\noindent
Published in final form in: \\
IEEE Transactions on Medical Imaging 28:1759-1769 (2009) \\
DOI: \href{http://dx.doi.org/10.1109/TMI.2009.2023119}{10.1109/TMI.2009.2023119}
}}}
\enlargethispage{3\baselineskip}
\newpage
\section{Introduction}
Fast spin-echo (FSE) MRI is one of the most frequently
used MRI techniques in today's clinical practice. In the FSE technique,
a train of multiple spin-echoes is generated after each RF excitation.
Therefore, it offers images with proton-density (PD) and T2 contrast at
a significantly reduced measuring time compared to a spin-echo sequence
with a single phase-encoding step per excitation. Although the k-space
lines that are sampled during each train of spin echoes have different
echo times (TE) and, thus, increasing T2 weighting, conventional
Cartesian sampling strategies still allow for a straightforward image
reconstruction with only minor artifacts under visual inspection when
grouping spin echoes with equal echo time around the center of k-space.
The underlying reason is the dominance of major image features by the
centrally located low spatial frequencies, while the periphery of
k-space rather defines edges which are less susceptible to changes in
the contrast weighting. Moreover, in the Cartesian case, the contrast of
the image can be adjusted by reordering the acquisition scheme such that
the central k-space lines are measured with the desired echo time. A
noticeable disadvantage of the T2 attenuation in k-space, however, is a
certain degree of image blurring. It arises because the signal decay can
be formally written as a multiplication of the true k-space data with
some envelope function, which yields a convolution in image space along
the phase encoding direction.
In recent years, radial sampling, as originally proposed by Lauterbur
\cite{Lau1973}, has regained considerable interest as an alternative to
Cartesian sampling. In this technique, coverage of k-space is
accomplished by sampling the data along coinciding spokes instead of
parallel lines. Radial sampling offers several promising advantages that
include a lower sensitivity to object motion \cite{Glo1992,Alt2002}, the
absence of any ghosting effects, and efficient undersampling properties.
In principle, radial trajectories can be combined with almost all MRI
techniques including FSE sequences \cite{Glo1992,Alt2002,Ras1999}. In
this specific case, however, the modified sampling scheme has major
implications for the image reconstruction procedure. Because all spokes
pass through the center of k-space, each spoke carries an equal amount
of low spatial frequency information which, for different echo times,
exhibits pronounced differences in T2 contrast. This situation is
fundamentally different to the Cartesian case and poses complications
when employing conventional reconstruction methods such as filtered
backprojection or gridding \cite{Sul1985}: (i) The merging of spokes
with different echo times may cause streaking artifacts around areas
with pronounced T2 relaxation because the relative signal decay leads to
jumps in the corresponding point-spread-function \cite{The1999}. (ii)
The contrast of the image always represents an average of the varying T2
weightings. (iii) Differently ordered k-space acquisitions may no longer
be used to alternate the image contrast and, thus, to distinguish
between PD and T2 relaxation. On the other hand, because all spokes pass
through the center of k-space and capture the low spatial frequencies at
every echo time, a single radial FSE data set implicitly contains
information about the local signal decay. Therefore, dedicated
reconstruction methods have been developed that extract this embedded
temporal information in order to quantify the T2 relaxation within a
reduced measurement time relative to Cartesian-based T2 quantification
approaches.
Most of the existing techniques, such as k-space weighted image contrast
(KWIC) \cite{Son2000,Alt2005}, attempt to calculate a series of echo
time-resolved images by mixing the low spatial frequencies from spokes
measured at the desired echo time with high frequency information from
other spokes (measured at different echo times). A drawback of these
techniques is that the TE mixing tends to cause artifacts in the
reconstructed images. Although their strength can be attenuated to some
degree with specific echo mixing schemes \cite{Alt2005}, these artifacts
limit the accuracy of the T2 estimates.
In particular, the values become dependent on the object size as the change
in k-space is assumed to be located only in the very central area that is
densely covered by spokes measured at an equal echo time \cite{Alt2005}.
To overcome the limitation, this work demonstrates an iterative concept
for the reconstruction from radially sampled FSE acquisitions. Instead
of calculating any intermediate images, the proposed method estimates a
spin density and a relaxivity map directly from the acquired k-space
data using a numerical optimization technique. Because the procedure
involves a modeling of the received MRI signal that accounts for the
time dependency of the acquired samples, the approach exploits the
entire data for finding these maps without assuming that the contrast
changes are restricted to a very central area of k-space. Related ideas
have been presented by Graff et al \cite{Gra2006} and Olafsson et al
\cite{Ola2008}.
\section{Theory}
The method proposed here can be seen as an extension of a previous
development for the reconstruction from highly undersampled radial
acquisitions with multiple receive coils \cite{Blo2007}. In this work,
the reconstruction is achieved by iteratively estimating an image that,
on the one hand, is consistent with the measured data and, on the other
hand, complies with prior object knowledge to compensate for the missing
data. For multi-echo data from a FSE sequence, this strategy is not
appropriate because it is impossible to find a single image that matches
the different contrasts at the same time. Therefore, it is necessary to
include the relaxation process into the signal modeling used to compare
the image estimate to the measured k-space samples. As the T2 relaxation
time is a locally varying quantity, this requires that the estimate
consists of a spin-density and a relaxation component instead of just an
intensity component, which directly yields a quantification of the
relaxation time. The objective of the present extended approach,
therefore, is to simultaneously find a spin-density map and a relaxivity
map such that snapshots, calculated for each echo time from these maps,
best match the spokes measured at the respective echo times.
\subsection{Cost Function}
In order to compute the maps, a cost function is needed which quantifies
the accuracy or quality of the match to the measured data, such that the
optimal solution can be identified by a minimum value of this function.
Because experimental MRI data is always contaminated by a certain degree
of Gaussian noise, the cost function of the proposed method uses the L2
norm and has the form
\begin{equation}
\label{eq:costfunc}
\Phi(\vec{\rho},\vec{r})=\frac{1}{2}\sum_t \sum_c \left\|\vec{F}(\vec{\rho},\vec{r},t,c) - \vec{y}_{t,c} \right\|^2_2 \;,
\end{equation}
where $\vec{\rho}$ is a vector containing the values of the spin-density
map, and $\vec{r}$ is a vector containing values of the relaxivity map.
For a base resolution of $n \times n$ pixels, both vectors have $n^2$
entries. Further, $\vec{y}_{t,c}$ is a vector containing the raw data
from channel $c$ of all spokes measured at echo time $t$, where $c$ runs
from 1 to the total number of receive channels and $t$ runs over all
echo times. Finally, $\vec{F}$ is a vector function that calculates a
snapshot from the given spin-density and relaxivity map at echo time
$t$, and translates it to k-space using a Fourier transformation and
subsequent evaluation at the sampling positions of the spokes acquired
at time $t$. Moreover, before Fourier transformation, the mapping
includes a multiplication with the sensitivity profile of coil $c$.
These coil profiles are estimated from the same data set in a preceding
processing step using a sum-of-squares combination of the channels,
where it is assumed that the individual sensitivity profiles are smooth
functions. A detailed description of this procedure is given in
\cite{Blo2007}.
The function $\vec{F}$ can be seen as the forward operation of the
reconstruction problem and comprises a model of the received MRI signal,
which is used to synthesize the corresponding k-space signal from the
given maps. The $j$th entry, i.e. the $j$th sample of the synthesized
data with k-space position $\vec{k}_j$ at echo time $t$, is given by
\begin{equation}
\label{eq:signalfunc}
F_j(\vec{\rho},\vec{r},t,c)= \sum_{\vec{x}\in \textrm{FOV}} \varrho(\vec{x}) \cdot e^{- R(\vec{x}) \cdot t} \cdot C_c(\vec{x}) \cdot e^{-i \, \vec{x} \cdot \vec{k}_j } \, ,
\end{equation}
where $\vec{x}$ denotes a position in image space such that the sum runs
over all (discrete) elements of the image matrix, and $C_c$ is the
complex sensitivity profile of the $c$th coil. Further, $\varrho$
denotes a function which evaluates the spin-density vector $\vec{\rho}$
at image position $\vec{x}$
\begin{equation}
\label{eq:densfunc}
\varrho(\vec{x})=\sum_i \rho_i \cdot \delta (\vec{x}-\vec{x_i}) \;,
\end{equation}
where $\rho_i$ denotes the $i$th component of the spin-density vector
with corresponding position $\vec{x_i}$ in image space. Accordingly, the
function $R$ evaluates the relaxivity vector $\vec{r}$
\begin{equation}
\label{eq:relaxfunc}
R(\vec{x})=\sum_i r_i \cdot \delta (\vec{x}-\vec{x_i}) \;.
\end{equation}
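As an illustration of the signal model (a minimal NumPy sketch, not the authors' C/C++ implementation; the direct sum below is only practical for tiny grids, whereas the paper uses an FFT with gridding interpolation):

```python
import numpy as np

def forward(rho, r, coil, t, kx, ky, xs, ys):
    """Synthesize the k-space samples of the signal model at echo time t
    by direct (slow, O(n^2 * m)) evaluation of the discrete sum.

    rho, r, coil -- 2-D spin-density, relaxivity, and coil-sensitivity maps
    kx, ky       -- k-space coordinates of the spokes acquired at time t
    xs, ys       -- image-space coordinate grids (same shape as the maps)
    """
    # contrast-weighted snapshot: spin density, T2 decay, coil weighting
    snapshot = rho * np.exp(-r * t) * coil
    F = np.empty(len(kx), dtype=complex)
    for j in range(len(kx)):
        F[j] = np.sum(snapshot * np.exp(-1j * (xs * kx[j] + ys * ky[j])))
    return F
```

For a map with a single nonzero pixel the sum collapses to one complex exponential, which gives a simple sanity check of such an implementation.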
To reconstruct a set of measured data $\vec{y}$, a pair of vectors
$(\vec{\rho},\vec{r})$ has to be found that minimizes the cost function
$\Phi$. This can be achieved by using a numerical optimization method
that is suited for large-scale problems, like the non-linear
conjugate-gradient (CG) method. Here, we employed the CG-Descent
algorithm recently presented by Hager and Zhang \cite{Hag2005}, which
can be used in a black box manner. Hence, it is only required to
evaluate $\Phi$ and its gradient at given positions
$(\vec{\rho},\vec{r})$ in the parameter space.
\subsection{Evaluation of Cost Function}
Evaluation of the cost function at a given pair $(\vec{\rho},\vec{r})$
can be done in a straightforward manner. However, to calculate the value
of $\Phi$ in a reasonable time, a practical strategy is to perform a
fast Fourier transformation (FFT) of the snapshots and to interpolate
the transforms onto the desired spoke positions in k-space using a
convolution with a radial Kaiser-Bessel kernel similar to the gridding
technique \cite{Sul1985,Jac1991}. Because the kernel is finite, it is
further necessary to pre-compensate for undesired intensity modulations
by multiplying the snapshots with an approximation of the kernel's
Fourier transform in front of the FFT -- commonly known as roll-off
correction \cite{Ras1999-2}.
The same strategy can be used for evaluating the gradient, which is a
vector containing the derivative of the cost function with respect to
each component of the spin-density and relaxivity vector. It is
convenient to decompose the problem into a separate derivation of $\Phi$
with respect to components of $\vec{\rho}$ and $\vec{r}$, respectively,
\begin{equation}
\label{eq:gradfunc}
\nabla \Phi= \left( \begin{array}{c} \nabla_\rho \Phi \\ \nabla_r \Phi \end{array} \right)\;.
\end{equation}
To simplify the notation, the calculation is only shown for a single
time point and a single coil (indicated by $\phi$ instead of $\Phi$)
\begin{equation}
\label{eq:simplfunc}
\phi = \frac{1}{2}\left\|\vec{F} - \vec{y}\right\|^2_2\;.
\end{equation}
Derivation of Eq.~(\ref{eq:signalfunc}) with respect to components
of $\vec{\rho}$ gives
\begin{equation}
\label{eq:fderiv}
\frac{\partial F_j}{\partial \rho_v}=e^{- R(\vec{x}_v) \cdot t} \cdot C_c(\vec{x}_v) \cdot e^{-i \, \vec{x}_v \cdot \vec{k}_j }\;,
\end{equation}
where $\vec{x}_v$ denotes the position of the $v$th component in image
space. Using the chain rule and inserting this equation then
yields (see Appendix) \begin{equation}
\label{eq:graddens}
\frac{\partial}{\partial \rho_v}\phi = e^{- R(\vec{x}_v) \cdot t} \cdot \Re \left\{ \overline{C_c(\vec{x}_v)} \cdot \sum_j \left( F_j - y_j \right) e^{i \, \vec{x}_v \cdot \vec{k}_j } \right\} .
\end{equation}
Hence, the gradient with respect to $\vec{\rho}$ can be obtained by
evaluating the cost function, calculating the residual, and performing
an inverse Fourier transformation, which is followed by a multiplication
with the complex conjugate of the coil profile and, finally, with the
relaxation term. Derivation of Eq.~(\ref{eq:signalfunc}) with
respect to components of $\vec{r}$ gives
\begin{equation}
\label{eq:derivrelax}
\frac{\partial F_j}{\partial r_v}= -t \cdot \varrho(\vec{x}_v) \cdot e^{- R(\vec{x}_v) \cdot t} \cdot C_c(\vec{x}_v) \cdot e^{-i \, \vec{x}_v \cdot \vec{k}_j }
\end{equation}
and, in a similar way to (\ref{eq:graddens}), this yields
\begin{eqnarray}
\label{eq:gradrelax}
\frac{\partial}{\partial r_v}\phi & = & -t \cdot \varrho(\vec{x}_v) \cdot e^{- R(\vec{x}_v) \cdot t} \cdot \Re \left\{
\overline{C_c(\vec{x}_v)} \cdot \sum_j \left( F_j - y_j \right) e^{i \, \vec{x}_v \cdot \vec{k}_j } \right\}.
\end{eqnarray}
Comparison with Eq.~(\ref{eq:graddens}) shows that the gradient with
respect to $\vec{r}$ can be easily obtained by multiplying the gradient
with respect to $\vec{\rho}$ with the components of the given
$\vec{\rho}$ and the echo time $t$. Of course, Eq.~(\ref{eq:graddens})
and Eq.~(\ref{eq:gradrelax}) have to be summed over each channel and
echo time occurring in the complete cost function~(\ref{eq:costfunc}).
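The gradient expressions can be verified numerically. The following toy example (a sketch with illustrative variable names, not taken from the actual implementation; it assumes real-valued maps, a single coil, and a single echo time) checks the analytic gradients with respect to the spin density and relaxivity against central finite differences of the cost function:

```python
import numpy as np

# Finite-difference verification of the analytic gradients for a tiny
# 1-D toy problem (one coil, one echo time, real-valued maps).
rng = np.random.default_rng(1)
n, m, t = 8, 12, 0.7
xs = np.arange(n, dtype=float)
ks = rng.uniform(-np.pi, np.pi, m)                         # k-space positions
C = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # coil profile
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # "measured" data
rho = rng.uniform(0.5, 1.5, n)                             # spin-density map
r = rng.uniform(0.1, 1.0, n)                               # relaxivity map
E = np.exp(-1j * np.outer(xs, ks))                         # e^{-i x k}

def phi(rho, r):
    F = (rho * np.exp(-r * t) * C) @ E
    return 0.5 * np.sum(np.abs(F - y) ** 2)

F = (rho * np.exp(-r * t) * C) @ E
back = np.conj(E) @ (F - y)            # sum_j (F_j - y_j) e^{+i x k_j}
# gradient w.r.t. spin density
g_rho = np.exp(-r * t) * np.real(np.conj(C) * back)
# gradient w.r.t. relaxivity: -t * rho times the former
g_r = -t * rho * g_rho

eps = 1e-6
for v in range(n):
    e = np.zeros(n); e[v] = eps
    assert abs((phi(rho + e, r) - phi(rho - e, r)) / (2 * eps) - g_rho[v]) < 1e-5
    assert abs((phi(rho, r + e) - phi(rho, r - e)) / (2 * eps) - g_r[v]) < 1e-5
```

The check also makes the relation between the two gradients explicit: the relaxivity gradient is the spin-density gradient multiplied pointwise by $-t$ and the current spin-density estimate.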
\subsection{Regularization}
Because the aforementioned strategy for fast evaluation of the cost
function involves an interpolation step that is imperfect in practice,
and, further, because the measured values correspond to the continuous
Fourier transform whereas the estimated maps are discrete, it is
necessary to introduce a regularization of the estimate. If a
regularization is not used, implausible values arise after a certain
number of iterations in ill-conditioned areas of the estimate, i.e. in
entries of the maps that are not well-defined from the measured k-space
values. For instance, the edges of the maps' Fourier transforms outside
of the sampled disc are ill-conditioned due to a lack of measured
samples in these areas. Because the optimizer simply tries to reduce the
total value of the cost function, it employs such ``degrees of freedom''
to minimize the residuum at neighboring sampling positions (it should be
noted that the interpolation strategy assumes that the true function can
be locally approximated by a weighted sum over the nearby grid
points). This results in local k-space peaks with high amplitude that
translate into noise-alike image patterns for high iteration
numbers. The problem poses a general drawback of the iterative reconstruction
technique, which can be addressed either by stopping the optimization
procedure after some iterations or by regularizing the estimate to
suppress implausible values. Advantages of the latter approach are
that the estimate converges to a well-defined solution without being
sensitive to the total iteration number and that it allows for tuning
the solution by using different regularization techniques. A
regularization is accomplished by complementing the cost function with a
weighted penalty function $P(\vec{\rho},\vec{r})$ which can act on both
the spin-density and relaxivity map
\begin{eqnarray}
\label{eq:extcostfunc}
\Phi(\vec{\rho},\vec{r})&=&\frac{1}{2}\sum_t \sum_c \left\|\vec{F}(\vec{\rho},\vec{r},t,c) - \vec{y}_{t,c} \right\|^2_2 + \lambda \cdot P(\vec{\rho},\vec{r}) \;.
\end{eqnarray}
Different types of penalty functions can be used for the regularization.
For instance, constraining the total energy of the estimate by
penalizing the L2 norm of its components leads to the commonly known
(simple) Tikhonov regularization. In the present work, the L2 norms of
the finite differences of the maps' Fourier transforms are penalized,
which enforces certain smoothness of the k-space information
\begin{eqnarray}
\label{eq:regterm}
P(\vec{\rho},\vec{r})=\left\| D_x \mathcal{F} \, \vec{\rho} \right\|^2_2
+ \left\| D_y \mathcal{F} \, \vec{\rho} \right\|^2_2
+ \left\| D_x \mathcal{F} \, \vec{r} \right\|^2_2
+ \left\| D_y \mathcal{F} \, \vec{r} \right\|^2_2 \;.
\end{eqnarray}
Here, $\mathcal{F}$ denotes the discrete Fourier transformation, and $D_x$ is the
finite difference operator in x-direction
\begin{equation}
D_x \, \textrm{I}(x,y)=\textrm{I}(x,y)-\textrm{I}(x-1,y)\;.
\end{equation}
It turned out that this choice offers robust suppression of local intensity accumulations
in the ill-conditioned areas, so that no artifacts appear even for a high number of
iterations (e.g., 1000 iterations).
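For concreteness, the penalty term can be sketched as follows (an illustration, not the authors' implementation; the periodic boundary handling via `np.roll` is an assumption, since the text does not specify how the edges of k-space are treated):

```python
import numpy as np

def penalty(rho_map, r_map):
    """Squared L2 norms of the finite differences (x- and y-direction)
    of the discrete Fourier transforms of both maps."""
    total = 0.0
    for m in (rho_map, r_map):
        K = np.fft.fft2(m)
        dx = K - np.roll(K, 1, axis=1)   # D_x I(x,y) = I(x,y) - I(x-1,y)
        dy = K - np.roll(K, 1, axis=0)
        total += np.sum(np.abs(dx) ** 2) + np.sum(np.abs(dy) ** 2)
    return total
```

A spatially concentrated map has a slowly varying spectrum and hence a small penalty, whereas rapid variations in k-space (such as the isolated peaks described above) are penalized heavily.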
A drawback of any regularization technique is that it introduces a
weighting factor $\lambda$, which has to be chosen properly. If selected
too low, the regularization becomes ineffective, whereas if selected too
high, the solution will be biased towards the minimization of the
penalty term (here leading to an undesired image modulation). A method for
automatically determining the proper regularization weight would be highly
desirable, but the establishment of such a method is yet an open issue.
However, with the penalty term described in Eq.~(\ref{eq:regterm}), the reconstruction
appears to be rather insensitive to minor changes of the weighting factor $\lambda$.
In fact, all images presented here were obtained with a fixed value of
$\lambda=0.001$, which yielded an effective artifact suppression without
noticeably affecting the results. Presumably, the weight has to be adjusted for measured
data with completely different scaling. In this case, a reasonable adjustment strategy
is to start with a very small weight and to increase the value successively until the
artifacts visually disappear.
\subsection{Initialization}
In the single-echo reconstruction scenario, it is very efficient to
initialize the optimizer with a properly scaled gridding solution. This
choice significantly reduces the total number of iterations because the
optimizer starts with a reasonable guess. In the multi-echo case,
however, it is more difficult to obtain reasonable initial guesses for
the spin-density and relaxivity map, and several options exist. For
example, a curve fitting of either strongly undersampled or
low-resolution gridding solutions from single echo times could be used
to approximate the maps. Alternatively, an echo-sharing method like KWIC
could be employed. While preliminary analyses confirmed that these
strategies may lead to a certain acceleration of convergence, they also
indicated complications if the initial guesses contain implausible
values, for example, in relaxation maps which are obviously undefined in
areas with a void signal intensity. It is therefore necessary to remove
respective values from the initial guesses. The present work simply used
zero maps for the initialization, which require a higher number of
iterations but ensure a straightforward convergence to a reasonable
solution.
\subsection{Scaling and Snapshot Calculation}
Another factor with essential impact on the convergence rate is the
scaling of the time variable $t$. Although it intuitively makes sense to
directly use physical units, a proper rescaling of the time variable
significantly reduces the number of iterations.
Equation~(\ref{eq:gradrelax}) shows that the gradient with respect to
the relaxivity depends linearly on $t$, while this is not the case with
respect to the spin density. If the values of $t$ for the different
echoes are very small, then the cost function is much more sensitive to
changes in $\vec{\rho}$ and the problem is said to be \textit{poorly
scaled} \cite{Noc2006}. In contrast, large values of $t$ lead to a
dominant sensitivity to perturbations in $\vec{r}$. Because finding a
reasonable solution requires a matching of both maps at the same time,
the optimization procedure is especially effective when the scaling of
$t$ is selected such that there is a balanced influence on the cost
function. In our experience, a proper scaling, which depends on the
range of the object's spin-density and relaxivity values, allows for
reducing the number of required iterations from over 1000 iterations to
about only 80 iterations for a typical data set. Of course, a rescaling
of $t$ is accompanied by a corresponding scaling of the relaxivity
values in $\vec{r}$, which can be corrected afterwards to allow for
quantitative analyses. Notably, the sensitivity to scaling is a
property of the specific optimization method used here (the non-linear
CG method) and not related to the reconstruction concept itself. Hence,
exchanging the CG method by an optimization technique that is scale
invariant (like the Newton's method) might render a rescaling
unnecessary but is accompanied by other disadvantages like calculation
of the Hessian \cite{Noc2006}.
Finally, after complete estimation of the spin-density map $\vec{\rho}$
and relaxivity map $\vec{r}$, snapshot images can be calculated for an
arbitrary echo time with
\begin{equation}
\label{eq:snapshot}
I_t(\vec{x})=\varrho(\vec{x}) \cdot e^{- R(\vec{x}) \cdot t} \;.
\end{equation}
These images do not contain any additional information, but present the
estimated temporal information in a more familiar view.
\section{Methods}
\subsection{Data Acquisition}
For evaluation of the proposed reconstruction technique, simulated data
and experimental MRI data were used. The simulated data was created with
a numerical phantom which is composed of superimposed ellipses so that
the analytical Fourier transform of the phantom can be deduced from the
well-known Fourier transform of a circle. The numerical phantom mimics a
water phantom consisting of three compartments with different T2
relaxation times (200~ms, 100~ms, and 50~ms), which are surrounded by a
compartment with a relaxation time comparable to that of pure water
(1000~ms). All experiments were conducted at 2.9~T (Siemens Magnetom
TIM Trio, Erlangen, Germany) using a receive only 12-channel head coil
in circularly polarized (CP) mode, yielding four channels with different
combinations of the coils. Measurements were performed for a water
phantom doped with $\textrm{MnCl}_2$ as well as the human brain in vivo,
where written informed consent was obtained in all cases prior to each
examination.
The simulated data and the experimental phantom data was acquired with a
base resolution of 160 pixels covering a FOV of 120~mm (bandwidth
568~Hz/pixel), while human brain data was acquired with a base
resolution of 224 pixels covering a FOV of 208~mm (bandwidth
360~Hz/pixel). A train of 16 spin echoes with an echo spacing of 10~ms
was recorded after a slice-selective 90$^\circ$ excitation pulse
(section thickness 3~mm). The spin echoes were refocused using a
conventional 180$^\circ$ RF pulse, enclosed by crusher gradients to
dephase spurious FID signals. The total number of spokes per data set
ranged from 128 to 512 spokes, which were acquired using 8 to 32
excitations and measured with a repetition time of TR~=~7000~ms to avoid
saturation effects of the CSF. The ``angular bisection'' view-ordering
scheme was used as described by Song and Dougherty \cite{Son2000}, which
ensures that spokes measured at consecutive echo times have a maximum
angular distance. Noteworthy, this scheme is not required by the
proposed method, but it was employed to permit reconstructions with the
KWIC approach for comparison. Further, the sampling direction of every
second repetition was altered in such a way as to generate opposing
neighboring spokes. The procedure yields more tolerable artifacts in the
presence of off-resonances. Fat suppression was accomplished by a
preceding CHESS pulse, and an isotropic compensation mechanism was
applied to avoid gradient timing errors and corresponding smearing
artifacts due to misalignment of the data in k-space \cite{Spe2006}. For
comparison, a fully-sampled Cartesian data set of the human brain was
acquired with a multi-echo spin-echo sequence from the manufacturer
(base resolution 192~pixels, section thickness 4~mm, 16 echoes, echo
spacing 10~ms, TR~=~7000~ms). In this case, k-space was fully sampled at
all 16 echo times so that the acquisition time was about 22.4 minutes
for a single slice. Moreover, simulations with different degrees of
undersampling as well as simulations with added Gaussian noise were
performed for appraising the achievable reconstruction accuracy.
\subsection{Reconstruction}
All data processing was done offline using an in-house software package
written in C/C++. In a first step, phase offsets were removed by
aligning the phase of all spokes at the center of k-space. Coil
sensitivity profiles were estimated from the data set using the
procedure described in \cite{Blo2007}. In addition, a thresholding mask
was obtained from the smoothed sum-of-squares image, so that areas with
void signal intensity could be set to zero by applying the mask to all
reconstructed images. For the interpolation in k-space from grid to
spokes and vice versa, a Kaiser-Bessel window with $L=6$,
$\beta=13.8551$ and twofold oversampling was used \cite{Bea2005}. To
speed up the iterations, the interpolation coefficients were
precalculated and stored in a look-up table. The optimizer for
estimating the spin-density and relaxivity map was run for a fixed
number of 200 iterations to ensure that the estimate has converged,
although fewer iterations are usually sufficient for finding an accurate
solution. Hence, an automatic stopping criterion would be highly
desirable for routine use to avoid unnecessary computation time. The
scaling of the time variable was chosen heuristically such that $t=300
\cdot n$ for the phantom study and $t=150 \cdot n$ for human brain data,
where $n$ is the echo number. Because this factor is object-dependent
and has a significant impact on the optimization efficiency, it would be
highly desirable to employ an automatic mechanism for adjusting the
scaling. Such development, however, is outside the scope of this work.
For comparison, gridding reconstructions of the spokes measured at each
echo time were calculated using the same interpolation kernel. Here,
the estimated coil sensitivity profiles were used to combine the
different channels instead of taking a sum-of-squares. Further,
time-resolved reconstructions employing the KWIC method were calculated.
In the initial KWIC work \cite{Son2000}, only 8 instead of 16 echoes were
acquired per excitation. Therefore, we implemented two variants: either
high frequency information from all spokes was used to fill the outer
k-space area (kwic 16), or (in a sliding window manner) information from
only the 8 neighboring echo times was shared (kwic 8). Apart from that,
our implementation followed the basic KWIC approach described in \cite{Son2000}.
To allow for a fair comparison, the same
interpolation kernel was used, and coil profiles were employed for
channel combination. Finally, spin-density and relaxivity maps were
estimated from the images by a pixelwise curve fitting using the
Levenberg-Marquardt algorithm.
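A pixelwise fit of this kind can be sketched as follows; we use SciPy's `curve_fit` (which wraps a Levenberg-Marquardt solver) as a stand-in for the in-house implementation, and all variable names and starting values are our own:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, rho, R):
    # Mono-exponential signal model S(t) = rho * exp(-R * t)
    return rho * np.exp(-R * t)

# 16 echoes with 10 ms echo spacing, as in the acquisitions above.
echo_times = 10.0 * np.arange(1, 17)

def fit_pixel(signal, t=echo_times):
    """Fit spin density and relaxivity to one pixel's echo-time series."""
    popt, _ = curve_fit(decay, t, signal, p0=(signal[0], 1.0 / 100.0))
    return popt  # (rho, R)

# Noiseless test pixel: rho = 1, T2 = 50 ms.
sig = decay(echo_times, 1.0, 1.0 / 50.0)
rho_est, R_est = fit_pixel(sig)
```

For noiseless data the fit recovers the simulated parameters essentially exactly; in practice each pixel of the time-resolved image series is fitted independently in this way.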
\section{Results}
\subsection{Experimental Data}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Figure1.eps}
\caption{
Spin-density maps (top) and relaxivity maps (bottom) estimated
for a phantom data set (base resolution 160~pixels, FOV 120~mm,
bandwidth 568~Hz/pixel bandwidth) using the proposed iterative method
(iter), KWIC combining all 16 echoes (kwic 16), and KWIC combining only
8 neighboring echoes (kwic 8). The images were acquired with a radial
FSE sequence using 32 repetitions and 16 echoes each, yielding a total
of 512 spokes. PD = proton density, R2 = T2 relaxivity.
}
\label{fig_phantom}
\end{figure}
Figure~\ref{fig_phantom} compares spin-density and relaxivity maps for
a phantom containing five water-filled tubes with different
concentrations of $\textrm{MnCl}_2$, i.e. different T2 relaxation times,
which were estimated using the proposed method, the KWIC method sharing
all echoes, and the KWIC method sharing 8 neighboring echoes. It can be
seen that the sharing of k-space data in the KWIC reconstructions leads
to ring-like artifacts inside the tubes with fast T2 relaxation, in line
with the findings of Altbach et al \cite{Alt2005}. The artifacts are
more pronounced in the KWIC variant sharing all echoes, while the
variant sharing only 8 echoes suffers from streaking artifacts due to
incomplete coverage of the outer k-space. Such artifacts do not appear
in the iteratively estimated maps. Here, the spin-density of the tube
with the shortest relaxation time is slightly underestimated, which is
probably caused by a higher amount of noise due to fast signal decay.
Further, because the relaxivity is undefined in areas with a void spin
density, the relaxivity maps are affected by spurious values outside of
the tubes in all cases. It should be noted that this effect is limited
to a narrow surrounding of the object due to the application of a
thresholding mask. In general, these spurious values are distinguishable
from the object by a lack of intensity in either the spin-density map or
the gridding image from all spokes.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Figure2.eps}
\caption{
Spin-density maps (top) and relaxivity maps (bottom) estimated
for a transverse section of the human brain in vivo (base resolution
224~pixels, FOV 208~mm, bandwidth 360~Hz/pixel) using the proposed
method (iter), KWIC combining all 16 echoes (kwic 16), and KWIC
combining only 8 neighboring echoes (kwic 8). Other parameters as in
Fig.~\ref{fig_phantom}. For comparison, maps from a fully-sampled
Cartesian reference data set are shown in the right column (cart). The
bottom row shows magnifications of the relaxivity maps.
}
\label{fig_brain}
\end{figure}
Figure \ref{fig_brain} shows corresponding reconstructions for a
transverse section of the human brain in vivo. Again, the KWIC
reconstruction using 8 echoes suffers from streaking artifacts, while
the maps involving all echoes appear fuzzy and blurry. In the latter case,
the spin-density map is further contaminated by sharp hyperintense
structures. This results from padding the high frequencies
with data from late echoes which introduces components with T2 weighting
and poses a general problem when sharing data with varying contrast.
The iteratively calculated maps
are free from these artifacts. For comparison, maps from a
fully-sampled Cartesian data set are presented, which show good
agreement with the maps obtained from the iterative approach.
Noteworthy, because the slice thickness was higher in the Cartesian
acquisition, these maps show a slightly larger part of the frontal
ventricles, which, however, is not related to the reconstruction
technique.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Figure3.eps}
\caption{
Snapshot reconstructions of the human brain (same data as
Fig.~\ref{fig_brain}) at the time of the first (ec1), 6th (ec6), and
last echo (ec16) using the proposed method (iter), KWIC combining 8
neighboring echoes (kwic 8), and direct gridding of the single echoes
(regr). The right column shows images from a fully-sampled Cartesian
data set (cart).
}
\label{fig_brainsnaps}
\end{figure}
For the same radial data set, Fig.~\ref{fig_brainsnaps} compares
snapshots of the first, 6th, and last echo reconstructed using the
proposed method with Eq.~(\ref{eq:snapshot}), direct gridding, and KWIC
with sharing of 8 echoes. Corresponding images from the Cartesian data
set are shown as reference. The contrast of the Cartesian and gridding
images can be taken as ground truth due to the equal echo time of all
k-space data used. It can be seen that the snapshots calculated with
the proposed method show good match to the contrast of the Cartesian
gold standard while they are not affected by streaking artifacts.
\begin{figure}[t!]
\centering
\includegraphics{Figure4.eps}
\caption{
Spin-density maps (PD) and relaxivity maps (R2) estimated using the
iterative approach for a transverse section of the human brain in vivo
from (left) 512 spokes acquired with 32 repetitions, (middle) 256 spokes acquired
with 16 repetitions, and (right) 128 spokes acquired with 8 repetitions
(16 echoes each, base resolution 224~pixels, FOV 208~mm, bandwidth 360~Hz/pixel).
The bottom row shows relaxivity maps obtained from the same data using the KWIC
approach (kwic8).
}
\label{fig_brainunders}
\end{figure}
Finally, Fig.~\ref{fig_brainunders} shows iterative reconstructions of
the human brain from radial data with different degrees of
undersampling, ranging from a total of 512 spokes (32 repetitions) to
only 128 spokes (8 repetitions). As expected, the data reduction is
accompanied by some loss of image quality, but even for 128 spokes
the iterative approach still offers a relatively good separation of
proton density and relaxivity. For comparison, relaxivity maps obtained
by the KWIC approach are shown in the bottom row, and it can be seen
that the image quality breaks down for higher degrees of undersampling.
\subsection{Simulated Data}
\begin{figure}[t!]
\centering
\includegraphics{Figure5.eps}
\caption{
Relaxivity maps estimated from simulated data with (top) 4032 spokes
from 252 repetitions (all echoes fully sampled), (middle) 512 spokes from
32 repetitions, and (bottom) 128 spokes from 8 repetitions using (iter)
the proposed iterative approach, (kwic8) the KWIC approach sharing 8 neighboring
echoes, and (regr) direct gridding of the single echoes (16 echoes each,
base resolution 160~pixels). The numerical phantom consists of compartments with
relaxations times of T2=200~ms, T2=100~ms, and T2=50~ms which are
surrounded by a compartment with T2=1000~ms.
}
\label{fig_accur}
\end{figure}
Figure \ref{fig_accur} shows relaxivity maps from simulated data
estimated with the proposed method, the KWIC method, and direct gridding
(the proton density maps are not shown as the proton density was set to
a constant value within the phantom). As in the prior figures, an
identical windowing function has been used for all relaxivity maps
within the figure, so that absolute relaxivity values were equally
mapped to grayscale values. The upper row corresponds to a fully sampled
data set (4032 spokes, 252 repetitions), and, hence, the number of spokes
for each echo time complies
with the Nyquist theorem in the sense that the angular distance between
neighboring spokes is less than or equal to $\Delta k=1/\textrm{FOV}$. In
this case, all three approaches yield maps without any streaking
artifacts. However, the maps created by KWIC and gridding present with
somewhat stronger Gibbs ringing artifacts, which are especially visible
within the smaller compartments. The middle row corresponds to the
degree of undersampling that was used in the experiments presented in
Fig.~\ref{fig_phantom}~--~Fig.~\ref{fig_brainsnaps} (512 spokes, 32 repetitions).
Here, streaking artifacts appear for KWIC and gridding, whereas the iterative
reconstruction is free from these artifacts. The bottom row shows maps
for a high degree of undersampling, corresponding to the highest
undersampling factor presented in Fig.~\ref{fig_brainunders} (128 spokes, 8 repetitions).
For such an undersampling, minor streaking artifacts arise also in the iterative
reconstruction, while the KWIC and gridding reconstructions exhibit
severe streaking artifacts. Noteworthy, only a single receive channel
was generated in the simulations, which demonstrates that the proposed
method offers an improvement even without exploiting localized coil
sensitivities, an advantage that arises implicitly for experimental
multi-coil data through the better conditioning of the problem. Table
\ref{tab_relaxivity} summarizes a region-of-interest (ROI) analysis of
the relaxivity maps from Fig.~\ref{fig_accur}, where identical regions
were analyzed in all maps. In all cases, the iterative approach
estimates the signal relaxivity with higher accuracy than gridding or
KWIC. Interestingly, even in the fully-sampled case, significant
deviations occur for the KWIC and gridding reconstruction. These
deviations result from the strong ringing effects that are apparent in
Fig.~\ref{fig_accur}, which appear to be more pronounced for radial
acquisitions than for Cartesian acquisitions. Hence, the strong signal
from the surrounding compartment smears into the quickly decaying
compartments, which causes a bias of the signal intensity in the
time-resolved images. The iterative approach seems to better cope with
this situation.
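The Nyquist condition invoked for the fully sampled simulation can be checked against the standard estimate for radial trajectories, which requires at least $(\pi/2)$ times the base resolution in spokes per image; the short helper below is our own illustration:

```python
import math

def spokes_for_nyquist(base_resolution):
    """Minimum spokes per image so that the azimuthal gap at the edge of
    k-space does not exceed Delta k = 1/FOV: n >= (pi/2) * base resolution."""
    return math.ceil(math.pi / 2.0 * base_resolution)

needed = spokes_for_nyquist(160)  # numerical phantom, base resolution 160
per_echo = 4032 // 16             # 4032 spokes distributed over 16 echo times
```

With 252 spokes per echo time against a requirement of 252, each echo image of the fully sampled data set indeed satisfies the criterion, whereas the undersampled cases (512/16 = 32 or 128/16 = 8 spokes per echo) fall far below it.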
\begin{figure}[t!]
\centering
\includegraphics{Figure6.eps}
\caption{
Relaxivity maps estimated from simulated data (top) without
noise, (middle) with a low level of Gaussian noise, and (bottom) with a
high level of Gaussian noise using (iter) the proposed iterative
approach, (kwic8) the KWIC approach sharing 8 neighboring echoes, and
(regr) direct gridding of the single echoes (512 spokes from 32 repetitions,
16 echoes each, base resolution 160~pixels). Relaxation times of the numerical
phantom as in Fig.~\ref{fig_accur}.
}
\label{fig_noise}
\end{figure}
\begin{table}[t!]
\makebox[\textwidth][c]{
\begin{tabular}{ll|cccc}
\hline
Noise level & & True T2 & Iterative & KWIC8 & Gridding \\
\hline\hline
\textit{none} & Compartment 1 & 50~ms & $50.2 \pm 0.1$~ms & $60.1 \pm 0.3$~ms & $59.8 \pm 0.3$~ms \\
& Compartment 2 & 100~ms & $100.0 \pm 0.2$~ms & $110.3 \pm 0.7$~ms & $109.8 \pm 0.7$~ms \\
& Compartment 3 & 200~ms & $199.9 \pm 0.6$~ms & $210.9 \pm 2.1$~ms & $210.5 \pm 2.2$~ms \\
& Surrounding & 1000~ms & $996.5 \pm 11.9$~ms & $922.3 \pm 32.9$~ms & $930.4 \pm 37.8$~ms \\
\hline
\textit{low} & Compartment 1 & 50~ms & $50.0 \pm 0.3$~ms & $59.5 \pm 0.6$~ms & $58.5 \pm 0.8$~ms \\
& Compartment 2 & 100~ms & $99.0 \pm 0.7$~ms & $109.7 \pm 1.1$~ms & $108.8 \pm 1.2$~ms \\
& Compartment 3 & 200~ms & $200.5 \pm 1.8$~ms & $211.9 \pm 2.8$~ms & $210.9 \pm 3.2$~ms \\
& Surrounding & 1000~ms & $1012.7 \pm 43.9$~ms & $901.8 \pm 43.3$~ms & $938.6 \pm 60.6$~ms \\
\hline
\textit{high} & Compartment 1 & 50~ms & $43.9 \pm 0.9$~ms & $48.6 \pm 4.2$~ms & $15.2 \pm 3.9$~ms \\
& Compartment 2 & 100~ms & $76.6 \pm 2.3$~ms & $76.4 \pm 5.9$~ms & $28.9 \pm 10.2$~ms \\
& Compartment 3 & 200~ms & $156.7 \pm 6.0$~ms & $155.2 \pm 10.0$~ms & $38.7 \pm 17.9$~ms \\
& Surrounding & 1000~ms & $733.4 \pm 121.9$~ms & $257.9 \pm 73.5$~ms & $143.6 \pm 228.4$~ms \\
\hline
\end{tabular}
}
\caption{ROI analysis of the relaxivity maps shown in Fig.~\ref{fig_noise}.}
\label{tab_noise}
\end{table}
Finally, Fig.~\ref{fig_noise} shows relaxivity maps estimated from
simulated data with different degrees of added Gaussian noise (512 spokes, 32
repetitions). It demonstrates that the proposed approach
remains stable under noisy conditions, which is expected because the
maps are matched to the measured data in a least-squares sense.
The maps estimated with KWIC and gridding present with a slightly
higher noise level. The results of a ROI analysis of the maps are
summarized in Table \ref{tab_noise}.
\section{Discussion}
As demonstrated, existing reconstruction methods for radial FSE that
share data from different echo times always require a trade-off between
the accuracy of the image contrast and an undersampling of the outer
k-space, which results in streaking artifacts. The main advantage of the
proposed method is that the use of a signal model allows for the
combination of spokes measured at different times. Thus, it exploits all
sampled data without having to assume that contrast changes are limited
to some central part of k-space.
\subsection{Computational Load}
A disadvantage of the present implementation, however,
is a significantly higher computational requirement than for
conventional non-iterative methods. Because an individual Fourier
transformation with subsequent gridding is required for each echo time
and receiver channel, a single evaluation of the cost function for the
examples analyzed here involves 64 FFT and gridding steps.
Evaluating the gradient requires twice as many operations,
and one full iteration of the algorithm often needs several
evaluations of the cost function and gradient.
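In concrete numbers for the experiments reported here (16 echo times, 4 receive channels), the per-evaluation operation counts quoted above follow directly:

```python
echoes, channels = 16, 4
fft_per_cost = echoes * channels  # FFT + gridding steps per cost evaluation
fft_per_grad = 2 * fft_per_cost   # the gradient needs twice as many
```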
However, many of the operations can be performed in parallel. In the
proof-of-principle implementation, we used the OpenMP interface to
parallelize the calculations for different echo times. Hence, the
evaluation of the cost function and gradient is executed on different
cores at the same time. Using a system equipped with two Intel Xeon
E5345 quad core processors running at 2.33 GHz, the 200 iterations took
about two minutes per slice (excluding the calculation of the look-up
table, which takes an additional 8 seconds on a single core). Despite
foreseeable progress in multi-core processor technology, which promises
significant acceleration, use of the method in the near future is likely
to be limited to applications where delayed reconstructions are tolerable.
However, because preliminary gridding reconstructions could be calculated
to provide immediate feedback to the operator, this limitation might be
secondary in clinical practice.
\subsection{Accuracy}
\begin{table}[t!]
\makebox[\textwidth][c]{
\begin{tabular}{ll|cccc}
\hline
Data set & & True T2 & Iterative & KWIC8 & Gridding \\
\hline\hline
\textit{4032 spokes} & Compartment 1 & 50~ms & $49.9 \pm 0.1$~ms & $59.7 \pm 0.2$~ms & $59.8 \pm 0.2$~ms \\
& Compartment 2 & 100~ms & $100.0 \pm 0.1$~ms & $109.9 \pm 0.2$~ms & $109.9 \pm 0.2$~ms \\
& Compartment 3 & 200~ms & $199.9 \pm 0.4$~ms & $209.9 \pm 0.2$~ms & $209.9 \pm 0.2$~ms \\
& Surrounding & 1000~ms & $1001.0 \pm 4.7$~ms & $930.6 \pm 1.0$~ms & $930.6 \pm 1.0$~ms \\
\hline
\textit{512 spokes} & Compartment 1 & 50~ms & $50.2 \pm 0.1$~ms & $60.1 \pm 0.3$~ms & $59.8 \pm 0.3$~ms \\
& Compartment 2 & 100~ms & $100.0 \pm 0.2$~ms & $110.3 \pm 0.7$~ms & $109.8 \pm 0.7$~ms \\
& Compartment 3 & 200~ms & $199.9 \pm 0.6$~ms & $210.9 \pm 2.1$~ms & $210.5 \pm 2.2$~ms \\
& Surrounding & 1000~ms & $996.5 \pm 11.9$~ms & $922.3 \pm 32.9$~ms & $930.4 \pm 37.8$~ms \\
\hline
\textit{128 spokes} & Compartment 1 & 50~ms & $49.1 \pm 0.1$~ms & $57.6 \pm 0.3$~ms & $60.0 \pm 0.5$~ms \\
& Compartment 2 & 100~ms & $98.8 \pm 0.2$~ms & $105.7 \pm 1.1$~ms & $108.9 \pm 1.3$~ms \\
& Compartment 3 & 200~ms & $197.1 \pm 0.7$~ms & $205.4 \pm 3.5$~ms & $213.7 \pm 4.1$~ms \\
& Surrounding & 1000~ms & $1032.3 \pm 14.0$~ms & $901.7 \pm 41.5$~ms & $1073.8 \pm 106.5$~ms \\
\hline
\end{tabular}
}
\caption{ROI analysis of the relaxivity maps shown in Fig.~\ref{fig_accur}.}
\label{tab_relaxivity}
\end{table}
From a theoretical point of view, the proposed method should make
optimal use of all data measured and, thus, deliver a high accuracy,
which is confirmed by the results listed in Table \ref{tab_relaxivity}
for a simulated phantom. Because the solution is found in a least-squares
sense, this should also hold true for data contaminated by noise as verified
in Fig.~\ref{fig_noise}. In practice, however, there are a number of
factors that might affect the achievable experimental accuracy.
First, the procedure used to determine the coil sensitivities is simple
and might introduce a bias due to inappropriate characterization of the
profiles. In particular, the procedure fails in areas with no or very
low signal intensity, so that routine applications will probably require
a more sophisticated procedure. Second, the Fourier transform of the
object, as encoded by the MRI signal, is non-compact. Therefore, any
finite sampling is incomplete, which makes it impossible to invert the
spatial encoding exactly. Consequently, truncation artifacts arise when
employing a discrete Fourier transformation (DFT), which present as a
convolution of the object with a sinc function. Because DFTs are used to
compare the snapshots to the measured data and, further, because the
truncation artifacts are different for each echo time, this effect might
interfere with the estimation of a solution that is fully consistent
with all measured data. In particular, ringing patterns around
high-intensity spots might lead to a bias of surrounding pixels that
possibly diverts the decay estimated in these areas. Noteworthy, this
effect is an inherent problem of any MRI technique and not limited to
the proposed method, as evident from the deviations occurring in the
gridding reconstructions in Table \ref{tab_relaxivity}.
Finally, if the relaxation process is so fast that the signal decay is
insufficiently captured by the acquired echo train, inaccurate
spin-density and relaxivity values will be estimated. For example, if a
signal intensity above noise level is received only at the first echo
time, the algorithm will probably assume a too low spin-density and a
too low relaxivity, which would likewise describe the observed signal
intensities in a least-squares sense. However, this is a general problem
of any T2 estimation technique and can only be overcome by a finer
temporal sampling. Also, inaccuracies that might occur when the actual
relaxation process differs from a pure mono-exponential decay are not
limited to the present method. In fact, any T2 estimation technique has
to employ a simplified signal model at some stage for quantifying the
relaxation and, consequently, deviations from an assumed exponential
signal decay always impair the estimation accuracy.
\subsection{Extensions}
Although focused on the reconstruction of FSE data, the method can be
used for multi-echo data from other sequences as well. Depending on the
contrast mechanism of the individual sequence, it might be necessary to
adapt the signal model (\ref{eq:signalfunc}). Further, for non-refocused
multi-echo sequences the data can be significantly affected by
off-resonance effects due to the pronounced sensitivity of radial
trajectories. In this case, it might be possible to map also the
off-resonances by replacing the relaxivity with a complex-valued
parameter and adjusting the gradient of the cost function. However, due
to the extended parameter space it is expected that this strategy will
be only successful if suitable constraints for the estimates are
incorporated. This can be achieved by extending the cost function
Eq.~(\ref{eq:extcostfunc}) by additional penalty terms that imply
certain prior knowledge about the solution. For example, if it can be
assumed that the object is piecewise-constant to some degree or
has a sparse transform representation, it is reasonable to penalize the total
variation of the maps \cite{Blo2007} or to apply a constraint on the transform
representation \cite{Lus2007}. Further, it might be beneficial to penalize relaxation times that
obviously exceed the range of plausible values, e.g.\ negative relaxation
times or relaxation times greater than the repetition time TR.
Moreover, the reconstruction concept is not only applicable to data with
different contrast due to spin relaxation or saturation, but can be
adapted to completely different imaging situations as well. In this
regard, the current work demonstrates the feasibility of extending the
inverse reconstruction scheme to more complex imaging problems that
require a non-linear processing. A prerequisite for the application to
other problems, however, is that a simple analytical signal model,
comparable to Eq.~(\ref{eq:signalfunc}), can be formulated. Further, it
is required that the derivative of the signal model with respect to all
components of the parameter space can be calculated, and that the model
allows for a relatively fast evaluation of the cost function and its
gradient.
\section{Conclusion}
This work presents a new concept for iterative reconstruction from
radial multi-echo data with a main focus on fast spin-echo acquisitions.
Instead of sharing k-space data with different echo times to approximate
time-resolved images, the proposed method employs a signal model to
account for the time dependency of the data and directly estimates a
spin-density and relaxivity map. Because the approach involves a
numerical optimization for finding a solution, it exploits all data
sampled and allows for an efficient T2 quantification from a single
radial data set. In comparison with Cartesian quantification techniques,
such data can be acquired in a shorter time and with less motion
sensitivity. The method is computationally intensive and presently
limited to applications where a delayed reconstruction is acceptable.
\newpage
\section*{Appendix}
The (simplified) cost function $\phi$ defined in
Eq.~(\ref{eq:simplfunc}) can be written as
\begin{eqnarray}
\phi &=& \frac{1}{2}\left\|\vec{F} - \vec{y}\right\|^2_2
=\frac{1}{2}\sum_j \left( F_j - y_j \right) \overline{\left( F_j - y_j \right)} \nonumber \\
&=& \frac{1}{2}\sum_j F_j \overline{F_j} + y_j \overline{y_j} - y_j \overline{F_j} - \overline{y_j} F_j\;,
\end{eqnarray}
where $\overline{(\ \cdot\ )}$ denotes the complex conjugate.
The derivative of this function with respect to any component $u$ of the estimate
vector is obtained using the chain rule
\begin{eqnarray}
\frac{\partial}{\partial u}\phi &=&
\frac{1}{2}\sum_j F_j \frac{\partial}{\partial u} \overline{F_j}+
\overline{F_j} \frac{\partial}{\partial u} F_j - \overline{y_j} \frac{\partial}{\partial u} F_j - y_j \frac{\partial}{\partial u} \overline{F_j} \nonumber \\
&=& \frac{1}{2}\sum_j \left( F_j - y_j \right) \frac{\partial}{\partial u} \overline{F_j}+ \left( \overline{F_j} - \overline{y_j} \right) \frac{\partial}{\partial u} F_j \nonumber \\
&=& \frac{1}{2}\sum_j \left( F_j - y_j \right) \frac{\partial}{\partial u} \overline{F_j}+ \frac{1}{2} \overline{\sum_j \left( F_j - y_j \right) \frac{\partial}{\partial u} \overline{F_j}} \nonumber \\
&=& \Re \left\{ \sum_j \left( F_j - y_j \right) \frac{\partial}{\partial u} \overline{F_j} \right\}\;.
\end{eqnarray}
Inserting Eq.~(\ref{eq:fderiv}) then yields the derivative of the cost
function with respect to a component of the spin-density map $\rho_v$
\begin{eqnarray}
\frac{\partial}{\partial \rho_v}\phi &=&
\Re \left\{ \sum_j \left( F_j - y_j \right) \frac{\partial}{\partial \rho_v} \overline{F_j} \right\} \nonumber \\
&=& \Re \left\{ \sum_j \left( F_j - y_j \right) e^{- R(\vec{x}_v) \cdot t} \cdot \overline{C_c(\vec{x}_v)} \cdot e^{i \, \vec{x}_v \cdot \vec{k}_j } \right\} \nonumber \\
&=& e^{- R(\vec{x}_v) \cdot t} \cdot \Re \left\{ \overline{C_c(\vec{x}_v)} \cdot \sum_j \left( F_j - y_j \right) e^{i \, \vec{x}_v \cdot \vec{k}_j } \right\}\;.
\end{eqnarray}
The derivative with respect to components of the relaxivity map can be
obtained accordingly.
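The real-part expression derived above can be validated numerically against central finite differences of $\phi$; the toy linear forward model below stands in for the actual gridding/DFT chain and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)

def F(u):
    # Toy complex forward model F(u); u is a real parameter vector.
    return A @ u

def phi(u):
    # phi = 0.5 * || F(u) - y ||^2
    r = F(u) - y
    return 0.5 * np.real(np.vdot(r, r))

def grad(u):
    # d(phi)/d(u_k) = Re{ sum_j (F_j - y_j) * conj(dF_j/du_k) }
    r = F(u) - y
    return np.real(A.conj().T @ r)

u0 = rng.standard_normal(n)
g = grad(u0)
eps = 1e-6
g_fd = np.array([(phi(u0 + eps * e) - phi(u0 - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
```

Since $\phi$ is quadratic in this toy case, the central differences agree with the analytic gradient up to rounding error, confirming the real-part formula.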
\newpage | {"config": "arxiv", "file": "1603.00040/radTSE2.tex"} |
:: Affine Independence in Vector Spaces
:: by Karol P\c{a}k
environ
vocabularies ALGSTR_0, ARYTM_1, ARYTM_3, XBOOLE_0, CARD_1, CONVEX1, CONVEX2,
CONVEX3, FINSEQ_1, FINSEQ_2, FINSEQ_4, FINSET_1, FUNCT_1, FUNCT_2,
MONOID_0, ORDERS_1, RELAT_1, RLSUB_1, RLVECT_1, RLVECT_2, RLVECT_3,
RUSUB_4, SEMI_AF1, SETFAM_1, SUBSET_1, TARSKI, CLASSES1, SUPINF_2,
RLVECT_5, REAL_1, NUMBERS, NAT_1, CARD_3, XXREAL_0, STRUCT_0, ZFMISC_1,
RLAFFIN1, ORDINAL1, ORDINAL4, PARTFUN1, FUNCT_7;
notations TARSKI, XBOOLE_0, SUBSET_1, ORDINAL1, ORDERS_1, CARD_1, NUMBERS,
XXREAL_0, XCMPLX_0, XREAL_0, REAL_1, FINSET_1, SETFAM_1, DOMAIN_1,
ALGSTR_0, RELAT_1, FUNCT_1, PARTFUN1, FUNCT_2, FINSEQ_1, STRUCT_0,
FINSEQ_2, FINSEQ_3, FINSEQOP, CLASSES1, RVSUM_1, RLVECT_1, RLVECT_2,
RLVECT_3, RLVECT_5, RLSUB_1, RLSUB_2, RUSUB_4, CONVEX1, CONVEX2, CONVEX3;
constructors BINOP_1, BINOP_2, CLASSES1, CONVEX1, CONVEX3, FINSEQOP, FINSOP_1,
MATRLIN, ORDERS_1, REALSET1, REAL_1, RLVECT_3, RLVECT_5, RVSUM_1,
RLSUB_2, RUSUB_5, SETWISEO, RELSET_1;
registrations BINOP_2, CARD_1, CONVEX1, FINSET_1, FINSEQ_2, FUNCT_1, FUNCT_2,
NAT_1, NUMBERS, RELAT_1, RLVECT_1, RLVECT_3, RUSUB_4, STRUCT_0, SUBSET_1,
VALUED_0, XCMPLX_0, XREAL_0, XXREAL_0, RLVECT_5, RELSET_1, RLVECT_2,
ORDINAL1;
requirements REAL, NUMERALS, SUBSET, BOOLE, ARITHM;
definitions CONVEX1, RLVECT_3, RUSUB_4, TARSKI, XBOOLE_0;
equalities CONVEX1, FINSEQ_1, RLVECT_2, RUSUB_4, RVSUM_1, STRUCT_0, SUBSET_1,
XBOOLE_0;
expansions CONVEX1, FINSEQ_1, RLVECT_2, RLVECT_3, RUSUB_4, STRUCT_0, TARSKI,
XBOOLE_0;
theorems CARD_2, CONVEX1, CONVEX3, ENUMSET1, FINSEQ_1, FINSEQ_2, FINSEQ_3,
FINSEQ_4, FINSEQOP, FINSET_1, FINSOP_1, FUNCT_1, FUNCT_2, MATRPROB,
NAT_1, ORDERS_1, ORDINAL1, PARTFUN1, RELAT_1, RFINSEQ, RLTOPSP1,
RLVECT_1, RLVECT_2, RLVECT_3, RLVECT_4, RLVECT_5, RLSUB_1, RLSUB_2,
RUSUB_4, RVSUM_1, SETFAM_1, STIRL2_1, TARSKI, XBOOLE_0, XBOOLE_1,
XCMPLX_0, XCMPLX_1, XREAL_1, XXREAL_0, ZFMISC_1, CARD_1;
schemes FRAENKEL, FUNCT_1, FUNCT_2, NAT_1, TREES_2, XFAMILY;
begin :: Preliminaries
reserve x,y for set,
r,s for Real,
S for non empty addLoopStr,
LS,LS1,LS2 for Linear_Combination of S,
G for Abelian add-associative right_zeroed right_complementable
non empty addLoopStr,
LG,LG1,LG2 for Linear_Combination of G,
g,h for Element of G,
RLS for non empty RLSStruct,
R for vector-distributive scalar-distributive scalar-associative
scalar-unital non empty RLSStruct,
AR for Subset of R,
LR,LR1,LR2 for Linear_Combination of R,
V for RealLinearSpace,
v,v1,v2,w,p for VECTOR of V,
A,B for Subset of V,
F1,F2 for Subset-Family of V,
L,L1,L2 for Linear_Combination of V;
registration
let RLS;
let A be empty Subset of RLS;
cluster conv A -> empty;
coherence
proof
A=the empty convex Subset of RLS;
then A in Convex-Family A by CONVEX1:def 4;
hence thesis by SETFAM_1:4;
end;
end;
Lm1: for A be Subset of RLS holds A c=conv A
proof
let A be Subset of RLS;
let x be object;
assume A1: x in A;
A2: now let y;
assume y in Convex-Family A;
then A c=y by CONVEX1:def 4;
hence x in y by A1;
end;
[#]RLS is convex;
then [#]RLS in Convex-Family A by CONVEX1:def 4;
hence thesis by A2,SETFAM_1:def 1;
end;
registration let RLS;
let A be non empty Subset of RLS;
cluster conv A -> non empty;
coherence
proof
A c=conv A by Lm1;
hence thesis;
end;
end;
theorem
for v be Element of R holds conv {v} = {v}
proof
let v be Element of R;
{v} is convex
proof
let u1,u2 be VECTOR of R,r;
assume that
0<r and
r<1;
assume that
A1: u1 in {v} and
A2: u2 in {v};
u1=v & u2=v by A1,A2,TARSKI:def 1;
then r*u1+(1-r)*u2=(r+(1-r))*u1 by RLVECT_1:def 6
.=u1 by RLVECT_1:def 8;
hence thesis by A1;
end;
then conv{v}c={v} by CONVEX1:30;
hence thesis by ZFMISC_1:33;
end;
theorem
for A be Subset of RLS holds A c= conv A by Lm1;
theorem Th3:
for A,B be Subset of RLS st A c= B holds conv A c= conv B
proof
let A,B be Subset of RLS such that
A1: A c=B;
A2: Convex-Family B c=Convex-Family A
proof
let x be object;
assume A3: x in Convex-Family B;
then reconsider X=x as Subset of RLS;
B c=X by A3,CONVEX1:def 4;
then A4: A c=X by A1;
X is convex by A3,CONVEX1:def 4;
hence thesis by A4,CONVEX1:def 4;
end;
[#]RLS is convex;
then [#]RLS in Convex-Family B by CONVEX1:def 4;
then A5: meet(Convex-Family A)c=meet(Convex-Family B) by A2,SETFAM_1:6;
let y be object;
assume y in conv A;
hence thesis by A5;
end;
theorem
for S,A be Subset of RLS st A c= conv S holds conv S = conv(S\/A)
proof
let S,A be Subset of RLS such that
A1: A c=conv S;
thus conv S c=conv(S\/A) by Th3,XBOOLE_1:7;
S c=conv S by Lm1;
then S\/A c=conv S by A1,XBOOLE_1:8;
hence thesis by CONVEX1:30;
end;
Lm2: for V be add-associative right_zeroed right_complementable
non empty addLoopStr
for A,B be Subset of V for v be Element of V holds
(v+A)\(v+B)=v+(A\B)
proof
let V be add-associative right_zeroed right_complementable
non empty addLoopStr;
let A,B be Subset of V;
let v be Element of V;
hereby let x be object;
assume A1: x in (v+A)\(v+B);
then x in v+A by XBOOLE_0:def 5;
then consider w be Element of V such that
A2: x=v+w and
A3: w in A;
not x in v+B by A1,XBOOLE_0:def 5;
then not w in B by A2;
then w in A\B by A3,XBOOLE_0:def 5;
hence x in v+(A\B) by A2;
end;
let x be object;
assume x in v+(A\B);
then consider w be Element of V such that
A4: x=v+w and
A5: w in A\B;
A6: not x in v+B
proof
assume x in v+B;
then consider s be Element of V such that
A7: x=v+s and
A8: s in B;
s=w by A4,A7,RLVECT_1:8;
hence thesis by A5,A8,XBOOLE_0:def 5;
end;
w in A by A5,XBOOLE_0:def 5;
then x in v+A by A4;
hence thesis by A6,XBOOLE_0:def 5;
end;
Lm3: for v,w be Element of S holds{v+w}=v+{w}
proof
let p,q be Element of S;
hereby let x be object;
assume x in {p+q};
then A1: x=p+q by TARSKI:def 1;
q in {q} by TARSKI:def 1;
hence x in p+{q} by A1;
end;
let x be object;
assume x in p+{q};
then consider v be Element of S such that
A2: x=p+v and
A3: v in {q};
v=q by A3,TARSKI:def 1;
hence thesis by A2,TARSKI:def 1;
end;
theorem Th5:
for V be add-associative non empty addLoopStr
for A be Subset of V for v,w be Element of V
holds (v+w)+A = v+(w+A)
proof
let V be add-associative non empty addLoopStr;
let A be Subset of V;
let v,w be Element of V;
set vw=v+w;
thus vw+A c=v+(w+A)
proof
let x be object;
assume x in vw+A;
then consider s be Element of V such that
A1: x=vw+s & s in A;
w+s in w+A & x=v+(w+s) by A1,RLVECT_1:def 3;
hence thesis;
end;
let x be object;
assume x in v+(w+A);
then consider s be Element of V such that
A2: x=v+s and
A3: s in w+A;
consider r be Element of V such that
A4: s=w+r and
A5: r in A by A3;
x=vw+r by A2,A4,RLVECT_1:def 3;
hence thesis by A5;
end;
theorem Th6:
for V be Abelian right_zeroed non empty addLoopStr for A be Subset of V
holds 0.V + A = A
proof
let V be Abelian right_zeroed non empty addLoopStr;
let A be Subset of V;
thus 0.V+A c=A
proof
let x be object;
assume x in 0.V+A;
then ex s be Element of V st x=0.V+s & s in A;
hence thesis by RLVECT_1:def 4;
end;
let x be object;
assume A1: x in A;
then reconsider v=x as Element of V;
0.V+v=v by RLVECT_1:def 4;
hence thesis by A1;
end;
Lm4: for V be non empty addLoopStr
for A be Subset of V for v be Element of V holds card (v+A) c= card A
proof
let V be non empty addLoopStr;
let A be Subset of V;
let v be Element of V;
deffunc F(Element of V)=v+$1;
card {F(w) where w is Element of V:w in A} c= card A from TREES_2:sch 2;
hence thesis;
end;
theorem Th7:
for A be Subset of G holds card A = card (g+A)
proof
let A be Subset of G;
-g+(g+A) = (-g+g)+A by Th5
.= 0.G+A by RLVECT_1:5
.= A by Th6;
then A1: card A c= card(g+A) by Lm4;
card(g+A) c= card A by Lm4;
hence thesis by A1;
end;
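Informally, Th7 says that translating a subset by a fixed element is a bijection (with inverse translation by -g), so it preserves cardinality. A minimal Python sketch over integer tuples — the helper `translate` is illustrative, not part of the Mizar article:

```python
# Sketch of Th7: translating a set by a fixed element preserves cardinality.
# Vectors are modeled as tuples of ints; `translate` is an illustrative helper.

def translate(g, A):
    """Return g + A = {g + a : a in A} for tuples of equal length."""
    return {tuple(gi + ai for gi, ai in zip(g, a)) for a in A}

A = {(0, 0), (1, 2), (3, -1)}
g = (5, 5)

shifted = translate(g, A)
assert len(shifted) == len(A)          # card(g + A) = card A

# Translating back by -g recovers A, mirroring the proof of Th7.
neg_g = tuple(-x for x in g)
assert translate(neg_g, shifted) == A
```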
theorem Th8:
for v be Element of S holds v + {}S = {}S
proof
let v be Element of S;
assume v+{}S<>{}S;
then consider x being object such that
A1: x in v+{}S by XBOOLE_0:def 1;
ex w be Element of S st x=v+w & w in {}S by A1;
hence contradiction;
end;
theorem Th9:
for A,B be Subset of RLS st A c= B holds r*A c= r*B
proof
let A,B be Subset of RLS such that
A1: A c=B;
let x be object;
assume x in r*A;
then ex v be Element of RLS st x=r*v & v in A;
hence thesis by A1;
end;
theorem Th10:
(r*s)* AR = r * (s*AR)
proof
set rs=r*s;
hereby let x be object;
assume x in rs*AR;
then consider v be Element of R such that
A1: x=rs*v & v in AR;
s*v in s*AR & x=r*(s*v) by A1,RLVECT_1:def 7;
hence x in r*(s*AR);
end;
let x be object;
assume x in r*(s*AR);
then consider v be Element of R such that
A2: x=r*v and
A3: v in s*AR;
consider w be Element of R such that
A4: v=s*w and
A5: w in AR by A3;
rs*w=x by A2,A4,RLVECT_1:def 7;
hence thesis by A5;
end;
theorem Th11:
1 * AR = AR
proof
hereby let x be object;
assume x in 1*AR;
then ex v be Element of R st x=1*v & v in AR;
hence x in AR by RLVECT_1:def 8;
end;
let x be object such that
A1: x in AR;
reconsider v=x as Element of R by A1;
x=1*v by RLVECT_1:def 8;
hence thesis by A1;
end;
theorem Th12:
0 * A c= {0.V}
proof
let x be object;
assume x in 0*A;
then ex v be Element of V st x=0*v & v in A;
then x=0.V by RLVECT_1:10;
hence thesis by TARSKI:def 1;
end;
theorem Th13:
for F be FinSequence of S holds (LS1+LS2) * F = (LS1*F) + (LS2*F)
proof
let p be FinSequence of S;
A1: len(LS1*p)=len p by FINSEQ_2:33;
A2: len(LS2*p)=len p by FINSEQ_2:33;
then reconsider L1p=LS1*p,L2p=LS2*p as Element of len p-tuples_on REAL
by A1,FINSEQ_2:92;
A3: len((LS1+LS2)*p)=len p by FINSEQ_2:33;
A4: now let k be Nat;
assume A5: 1<=k & k<=len p;
then k in dom((LS1+LS2)*p) by A3,FINSEQ_3:25;
then A6: ((LS1+LS2)*p).k=(LS1+LS2).(p.k) by FUNCT_1:12;
k in dom L1p by A1,A5,FINSEQ_3:25;
then A7: p.k in dom LS1 & L1p.k=LS1.(p.k) by FUNCT_1:11,12;
k in dom L2p by A2,A5,FINSEQ_3:25;
then A8: L2p.k=LS2.(p.k) by FUNCT_1:12;
dom LS1=the carrier of S by FUNCT_2:def 1;
hence ((LS1+LS2)*p).k = L1p.k+L2p.k by A6,A8,A7,RLVECT_2:def 10
.= (L1p+L2p).k by RVSUM_1:11;
end;
len(L1p+L2p)=len p by CARD_1:def 7;
hence thesis by A3,A4;
end;
theorem Th14:
for F be FinSequence of V holds (r*L) * F = r * (L*F)
proof
let p be FinSequence of V;
A1: len((r*L)*p)=len p by FINSEQ_2:33;
A2: len(L*p)=len p by FINSEQ_2:33;
then reconsider rLp=(r*L)*p,Lp=L*p as Element of len p-tuples_on REAL
by A1,FINSEQ_2:92;
A3: now let k be Nat;
assume A4: 1<=k & k<=len p;
then k in dom Lp by A2,FINSEQ_3:25;
then A5: Lp.k=L.(p.k) & p.k in dom L by FUNCT_1:11,12;
k in dom rLp by A1,A4,FINSEQ_3:25;
then A6: rLp.k=(r*L).(p.k) by FUNCT_1:12;
(r*Lp).k=r*(Lp.k) & dom L=the carrier of V
by FUNCT_2:def 1,RVSUM_1:44;
hence rLp.k=(r*Lp).k by A5,A6,RLVECT_2:def 11;
end;
len(r*Lp)=len p by CARD_1:def 7;
hence thesis by A1,A3;
end;
theorem Th15:
A is linearly-independent & A c= B & Lin B = V implies
ex I be linearly-independent Subset of V st A c= I & I c= B & Lin I = V
proof
assume that
A1: A is linearly-independent & A c=B and
A2: Lin B =V;
defpred P[set] means
(ex I be Subset of V st I=$1 & A c=I & I c=B & I is linearly-independent);
consider Q be set such that
A3: for Z be set holds Z in Q iff Z in bool(the carrier of V) & P[Z]
from XFAMILY:sch 1;
A4: now let Z be set;
assume that
A5: Z<>{} and
A6: Z c=Q and
A7: Z is c=-linear;
set W=union Z;
W c=the carrier of V
proof
let x be object;
assume x in W;
then consider X be set such that
A8: x in X and
A9: X in Z by TARSKI:def 4;
X in bool(the carrier of V) by A3,A6,A9;
hence thesis by A8;
end;
then reconsider W as Subset of V;
set y=the Element of Z;
y in Q by A5,A6;
then A10: ex I being Subset of V st I=y & A c=I & I c=B & I is
linearly-independent by A3;
A11: W is linearly-independent
proof
deffunc F(object)={D where D is Subset of V:$1 in D & D in Z};
let l be Linear_Combination of W;
assume that
A12: Sum(l)=0.V and
A13: Carrier(l)<>{};
consider f being Function such that
A14: dom f=Carrier(l) and
A15: for x be object st x in Carrier(l) holds f.x=F(x)
from FUNCT_1:sch 3;
reconsider M=rng f as non empty set by A13,A14,RELAT_1:42;
A16: now assume{} in M;
then consider x be object such that
A17: x in dom f and
A18: f.x={} by FUNCT_1:def 3;
Carrier(l)c=W by RLVECT_2:def 6;
then consider X be set such that
A19: x in X and
A20: X in Z by A14,A17,TARSKI:def 4;
reconsider X as Subset of V by A3,A6,A20;
X in {D where D is Subset of V:x in D & D in Z} by A19,A20;
hence contradiction by A14,A15,A17,A18;
end;
set F = the Choice_Function of M;
set S=rng F;
A21: F is Function of M, union M by A16,ORDERS_1:90;
the Element of M in M;
then A22: union M<>{} by A16,ORDERS_1:6;
then A23: dom F=M by FUNCT_2:def 1,A21;
A24: now let X be set;
assume X in S;
then consider x be object such that
A25: x in dom F and
A26: F.x=X by FUNCT_1:def 3;
reconsider x as set by TARSKI:1;
consider y be object such that
A27: y in dom f & f.y=x by A23,A25,FUNCT_1:def 3;
A28: x={D where D is Subset of V:y in D & D in Z} by A14,A15,A27;
X in x by A16,A23,A25,A26,ORDERS_1:89;
then ex D be Subset of V st D=X & y in D & D in Z by A28;
hence X in Z;
end;
A29: now let X,Y be set;
assume X in S & Y in S;
then X in Z & Y in Z by A24;
then X,Y are_c=-comparable by A7,ORDINAL1:def 8;
hence X c=Y or Y c=X;
end;
dom F is finite by A14,A23,FINSET_1:8;
then S is finite by FINSET_1:8;
then union S in S by A22,A29,CARD_2:62,A21;
then union S in Z by A24;
then consider I be Subset of V such that
A30: I=union S and
A c=I and
I c=B and
A31: I is linearly-independent by A3,A6;
Carrier(l) c= union S
proof
let x be object;
assume A32: x in Carrier(l);
then A33: f.x={D where D is Subset of V:x in D & D in Z} by A15;
A34: f.x in M by A14,A32,FUNCT_1:def 3;
then F.(f.x) in f.x by A16,ORDERS_1:89;
then A35: ex D be Subset of V st F.(f.x)=D & x in D & D in Z
by A33;
F.(f.x) in S by A23,A34,FUNCT_1:def 3;
hence thesis by A35,TARSKI:def 4;
end;
then l is Linear_Combination of I by A30,RLVECT_2:def 6;
hence thesis by A12,A13,A31;
end;
A36: W c=B
proof
let x be object;
assume x in W;
then consider X be set such that
A37: x in X and
A38: X in Z by TARSKI:def 4;
ex I be Subset of V st I=X & A c=I & I c=B &
I is linearly-independent by A3,A6,A38;
hence thesis by A37;
end;
y c=W by A5,ZFMISC_1:74;
then A c=W by A10;
hence union Z in Q by A3,A11,A36;
end;
Q<>{} by A1,A3;
then consider X be set such that
A39: X in Q and
A40: for Z be set st Z in Q & Z<>X holds not X c=Z by A4,ORDERS_1:67;
consider I be Subset of V such that
A41: I=X and
A42: A c=I and
A43: I c=B and
A44: I is linearly-independent by A3,A39;
reconsider I as linearly-independent Subset of V by A44;
take I;
thus A c=I & I c=B by A42,A43;
assume A45: Lin(I)<>V;
now assume A46: for v st v in B holds v in Lin(I);
now let v;
assume v in Lin(B);
then consider l be Linear_Combination of B such that
A47: v = Sum(l) by RLVECT_3:14;
Carrier(l) c= the carrier of Lin(I)
proof
let x be object;
assume A48: x in Carrier(l);
then reconsider a=x as VECTOR of V;
Carrier(l)c=B by RLVECT_2:def 6;
then a in Lin(I) by A46,A48;
hence thesis;
end;
then reconsider l as Linear_Combination of Up Lin I
by RLVECT_2:def 6;
Sum(l)=v by A47;
then v in Lin(Up Lin I) by RLVECT_3:14;
hence v in Lin(I) by RLVECT_3:18;
end;
then Lin(B) is Subspace of Lin(I) by RLSUB_1:29;
hence contradiction by A2,A45,RLSUB_1:26;
end;
then consider v such that
A49: v in B and
A50: not v in Lin(I);
A51: not v in I by A50,RLVECT_3:15;
{v}c=B by A49,ZFMISC_1:31;
then A52: I\/{v}c=B by A43,XBOOLE_1:8;
v in {v} by TARSKI:def 1;
then A53: v in I\/{v} by XBOOLE_0:def 3;
A54: I\/{v} is linearly-independent
proof
let l be Linear_Combination of I\/{v};
assume A55: Sum(l)=0.V;
per cases;
suppose v in Carrier(l);
then A56: -l.v<>0 by RLVECT_2:19;
deffunc G(VECTOR of V)=In(0,REAL);
deffunc F(VECTOR of V)=l.$1;
consider f be Function of the carrier of V,REAL such that
A57: f.v=In(0,REAL) and
A58: for u be Element of V st u<>v holds f.u=F(u)
from FUNCT_2:sch 6;
reconsider f as Element of Funcs(the carrier of V,REAL)
by FUNCT_2:8;
now let u be Element of V;
assume not u in Carrier(l)\{v};
then not u in Carrier(l) or u in {v} by XBOOLE_0:def 5;
then l.u=0 & u<>v or u=v by TARSKI:def 1;
hence f.u=0 by A57,A58;
end;
then reconsider f as Linear_Combination of V by RLVECT_2:def 3;
Carrier(f) c= I
proof
let x be object;
assume x in Carrier(f);
then consider u be Element of V such that
A59: u=x and
A60: f.u<>0;
f.u=l.u by A57,A58,A60;
then Carrier(l)c=I\/{v} & u in Carrier(l) by A60,RLVECT_2:def 6;
then u in I or u in {v} by XBOOLE_0:def 3;
hence thesis by A57,A59,A60,TARSKI:def 1;
end;
then reconsider f as Linear_Combination of I by RLVECT_2:def 6;
consider g be Function of the carrier of V,REAL such that
A61: g.v=-l.v and
A62: for u be Element of V st u<>v holds g.u=G(u)
from FUNCT_2:sch 6;
reconsider g as Element of Funcs(the carrier of V,REAL)
by FUNCT_2:8;
now let u be Element of V;
assume not u in {v};
then u<>v by TARSKI:def 1;
hence g.u=0 by A62;
end;
then reconsider g as Linear_Combination of V by RLVECT_2:def 3;
Carrier(g)c={v}
proof
let x be object;
assume x in Carrier(g);
then ex u be Element of V st x=u & g.u<>0;
then x=v by A62;
hence thesis by TARSKI:def 1;
end;
then reconsider g as Linear_Combination of{v} by RLVECT_2:def 6;
A63: Sum(g)=(-l.v)*v by A61,RLVECT_2:32;
now let u be Element of V;
now per cases;
suppose A64: v=u;
thus(f-g).u=f.u-g.u by RLVECT_2:54
.=l.u by A57,A61,A64;
end;
suppose A65: v<>u;
thus(f-g).u=f.u-g.u by RLVECT_2:54
.=l.u-g.u by A58,A65
.=l.u-0 by A62,A65
.=l.u;
end;
end;
hence (f-g).u=l.u;
end;
then f-g=l;
then 0.V=Sum(f)-Sum(g) by A55,RLVECT_3:4;
then Sum(f)=0.V+Sum(g) by RLSUB_2:61
.=(-l.v)*v by A63;
then A66: (-l.v)*v in Lin(I) by RLVECT_3:14;
(-l.v)"*((-l.v)*v)=(-l.v)"*(-l.v)*v by RLVECT_1:def 7
.=1*v by A56,XCMPLX_0:def 7
.=v by RLVECT_1:def 8;
hence thesis by A50,A66,RLSUB_1:21;
end;
suppose A67: not v in Carrier(l);
Carrier(l)c=I
proof
let x be object;
assume A68: x in Carrier(l);
Carrier(l)c=I\/{v} by RLVECT_2:def 6;
then x in I or x in {v} by A68,XBOOLE_0:def 3;
hence thesis by A67,A68,TARSKI:def 1;
end;
then l is Linear_Combination of I by RLVECT_2:def 6;
hence thesis by A55,RLVECT_3:def 1;
end;
end;
A c=I\/{v} by A42,XBOOLE_1:10;
then I\/{v} in Q by A3,A52,A54;
hence contradiction by A40,A41,A51,A53,XBOOLE_1:7;
end;
begin :: Two Transformations of Linear Combinations
:: Translation of a linear combination by a vector g: (g + LG).h = LG.(h - g).
definition
let G,LG,g;
func g + LG -> Linear_Combination of G means :Def1:
it.h = LG.(h-g);
existence
proof
deffunc G(Element of G)=g+$1;
set vC={G(h):h in Carrier LG};
A1: vC c=the carrier of G
proof
let x be object;
assume x in vC;
then ex h st x=G(h) & h in Carrier LG;
hence thesis;
end;
A2: Carrier LG is finite;
vC is finite from FRAENKEL:sch 21(A2);
then reconsider vC as finite Subset of G by A1;
deffunc F(Element of G)=LG.($1-g);
consider f be Function of the carrier of G,REAL such that
A3: for h holds f.h=F(h) from FUNCT_2:sch 4;
reconsider f as Element of Funcs(the carrier of G,REAL) by FUNCT_2:8;
now let h;
assume A4: not h in vC;
assume f.h<>0;
then LG.(h-g)<>0 by A3;
then A5: h-g in Carrier LG;
g+(h-g) = (h+-g)+g by RLVECT_1:def 11
.= h+(g+-g) by RLVECT_1:def 3
.= h+0.G by RLVECT_1:def 10
.= h;
hence contradiction by A4,A5;
end;
then reconsider f as Linear_Combination of G by RLVECT_2:def 3;
take f;
thus thesis by A3;
end;
uniqueness
proof
let L1,L2 be Linear_Combination of G such that
A6: for h holds L1.h=LG.(h-g) and
A7: for h holds L2.h=LG.(h-g);
now let h;
thus L1.h = LG.(h-g) by A6
.= L2.h by A7;
end;
hence thesis;
end;
end;
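Def1 defines the translated combination pointwise by (g + LG).h = LG.(h - g); Th16 below then shows the carrier shifts by g. An informal Python sketch with dict-based combinations (all names are illustrative, not from the article):

```python
# Sketch of Def1/Th16: translating a linear combination L by g gives
# (g + L).h = L.(h - g), so the support (carrier) shifts by g.

def translate_lc(g, L):
    """L maps vectors (tuples) to nonzero reals; return g + L."""
    return {tuple(gi + hi for gi, hi in zip(g, h)): c for h, c in L.items()}

L = {(1, 0): 2.0, (0, 1): -3.0}     # carrier {(1,0), (0,1)}
g = (4, 4)
gL = translate_lc(g, L)

# Carrier(g + L) = g + Carrier(L)
assert set(gL) == {(5, 4), (4, 5)}
# (g + L).(g + h) = L.h, since (g + h) - g = h
assert gL[(5, 4)] == L[(1, 0)]
```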
theorem Th16:
Carrier (g+LG) = g + Carrier LG
proof
thus Carrier(g+LG)c=g+Carrier LG
proof
let x be object such that
A1: x in Carrier(g+LG);
reconsider w=x as Element of G by A1;
A2: (g+LG).w <>0 by A1,RLVECT_2:19;
A3: g+(w-g) = (w+-g)+g by RLVECT_1:def 11
.= w+(g+-g) by RLVECT_1:def 3
.= w+0.G by RLVECT_1:def 10
.= w;
(g+LG).w=LG.(w-g) by Def1;
then w-g in Carrier LG by A2;
hence thesis by A3;
end;
let x be object;
assume x in g+Carrier LG;
then consider w be Element of G such that
A4: x=g+w and
A5: w in Carrier LG;
g+w-g = g+w+-g by RLVECT_1:def 11
.= (-g+g)+w by RLVECT_1:def 3
.= 0.G+w by RLVECT_1:5
.= w;
then A6: (g+LG).(g+w)=LG.w by Def1;
LG.w<>0 by A5,RLVECT_2:19;
hence thesis by A4,A6;
end;
theorem Th17:
g + (LG1+LG2) = (g+LG1) + (g+LG2)
proof
now let h;
thus(g+(LG1+LG2)).h = (LG1+LG2).(h-g) by Def1
.= LG1.(h-g)+LG2.(h-g) by RLVECT_2:def 10
.=(g+LG1).h+LG2.(h-g) by Def1
.=(g+LG1).h+(g+LG2).h by Def1
.=((g+LG1)+(g+LG2)).h by RLVECT_2:def 10;
end;
hence thesis;
end;
theorem
v + (r*L) = r * (v+L)
proof
now let w;
thus(v+(r*L)).w = (r*L).(w-v) by Def1
.= r*(L.(w-v)) by RLVECT_2:def 11
.= r*((v+L).w) by Def1
.= (r*(v+L)).w by RLVECT_2:def 11;
end;
hence thesis;
end;
theorem Th19:
g + (h+LG) = (g+h) + LG
proof
now let s be Element of G;
thus(g+(h+LG)).s = (h+LG).(s-g) by Def1
.= LG.(s-g-h) by Def1
.= LG.(s-(g+h)) by RLVECT_1:27
.= ((g+h)+LG).s by Def1;
end;
hence thesis;
end;
theorem Th20:
g + ZeroLC G = ZeroLC G
proof
Carrier ZeroLC(G)={}G by RLVECT_2:def 5;
then {}G = g+Carrier ZeroLC(G) by Th8
.= Carrier(g+ZeroLC(G)) by Th16;
hence thesis by RLVECT_2:def 5;
end;
theorem Th21:
0.G + LG = LG
proof
now let g;
thus(0.G+LG).g = LG.(g-0.G) by Def1
.= LG.g;
end;
hence thesis;
end;
:: Rescaling of a linear combination: for r <> 0, (r (*) LR).v = LR.(r"*v);
:: for r = 0 the result is the zero combination.
definition
let R,LR; let r be Real;
func r (*) LR -> Linear_Combination of R means :Def2:
for v be Element of R holds it.v = LR.(r"*v) if r<>0 otherwise
it = ZeroLC R;
existence
proof
now deffunc F(Element of R)=LR.(r"*$1);
deffunc G(Element of R)=r*$1;
consider f being Function of the carrier of R,REAL such that
A1: for v being Element of R holds f.v=F(v) from FUNCT_2:sch 4;
reconsider f as Element of Funcs(the carrier of R,REAL) by FUNCT_2:8;
assume A2: r<>0;
A3: now let v be Element of R;
assume A4: not v in r*Carrier LR;
A5: f.v=LR.(r"*v) by A1;
A6: r*(r"*v) = (r*r")*v by RLVECT_1:def 7
.= 1*v by A2,XCMPLX_0:def 7
.= v by RLVECT_1:def 8;
assume f.v<>0;
then r"*v in Carrier LR by A5;
hence contradiction by A4,A6;
end;
A7: Carrier LR is finite;
{G(w) where w is Element of R:w in Carrier LR} is finite
from FRAENKEL:sch 21(A7);
then reconsider f as Linear_Combination of R by A3,RLVECT_2:def 3;
take f;
let v be Element of R;
f.v=LR.(r"*v) by A1;
hence thesis by A1;
end;
hence thesis;
end;
uniqueness
proof
now let L1,L2 be Linear_Combination of R such that
A8: for v be Element of R holds L1.v=LR.(r"*v) and
A9: for v be Element of R holds L2.v=LR.(r"*v);
now let v be Element of R;
thus L1.v = LR.(r"*v) by A8
.= L2.v by A9;
end;
hence L1=L2;
end;
hence thesis;
end;
consistency;
end;
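Def2 moves the carrier rather than the coefficients: for r <> 0, (r (*) L).v = L.(r" * v), so the carrier is scaled by r (Th23 below), while r = 0 yields ZeroLC. A hedged Python sketch (the helper `scale_lc` is an assumption, not the Mizar functor):

```python
# Sketch of Def2/Th23: rescaling a linear combination by r scales its carrier;
# r = 0 gives the zero combination (empty carrier).

def scale_lc(r, L):
    """L maps vectors (tuples of floats) to nonzero reals; return r (*) L."""
    if r == 0:
        return {}                     # ZeroLC
    return {tuple(r * hi for hi in h): c for h, c in L.items()}

L = {(1.0, 0.0): 2.0, (0.0, 1.0): -3.0}
assert scale_lc(2.0, L) == {(2.0, 0.0): 2.0, (0.0, 2.0): -3.0}
assert scale_lc(0.0, L) == {}         # otherwise-branch of Def2
```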
theorem Th22:
Carrier (r(*)LR) c= r*Carrier LR
proof
let x be object such that
A1: x in Carrier(r(*)LR);
reconsider v=x as Element of R by A1;
A2: (r(*)LR).v<>0 by A1,RLVECT_2:19;
(0 qua Real)(*)LR=ZeroLC(R) by Def2;
then A3: r<>0 by A1,RLVECT_2:def 5;
then (r(*)LR).v=LR.(r"*v) by Def2;
then A4: r"*v in Carrier LR by A2;
r*(r"*v) = (r*r")*v by RLVECT_1:def 7
.= 1*v by A3,XCMPLX_0:def 7
.= v by RLVECT_1:def 8;
hence thesis by A4;
end;
theorem Th23:
r <> 0 implies Carrier (r(*)LR) = r * Carrier LR
proof
assume A1: r<>0;
thus Carrier(r(*)LR)c=r*Carrier LR by Th22;
let x be object;
assume x in r*Carrier LR;
then consider v be Element of R such that
A2: x=r*v and
A3: v in Carrier LR;
r"*(r*v) = (r"*r)*v by RLVECT_1:def 7
.= 1*v by A1,XCMPLX_0:def 7
.= v by RLVECT_1:def 8;
then A4: LR.v=(r(*)LR).x by A1,A2,Def2;
LR.v<>0 by A3,RLVECT_2:19;
hence thesis by A2,A4;
end;
theorem Th24:
r (*) (LR1+LR2) = (r(*)LR1) + (r(*)LR2)
proof
per cases;
suppose A1: r = 0;
set Z=ZeroLC(R);
A2: now let v be Element of R;
thus (Z+Z).v = Z.v+Z.v by RLVECT_2:def 10
.= Z.v+0 by RLVECT_2:20
.= Z.v;
end;
thus r(*)(LR1+LR2) = Z by A1,Def2
.= Z+Z by A2
.= (r(*)LR1)+Z by A1,Def2
.= (r(*)LR1)+(r(*)LR2) by A1,Def2;
end;
suppose A3: r<>0;
now let v be Element of R;
thus(r(*)(LR1+LR2)).v = (LR1+LR2).(r"*v) by A3,Def2
.= LR1.(r"*v)+LR2.(r"*v) by RLVECT_2:def 10
.= (r(*)LR1).v+LR2.(r"*v) by A3,Def2
.= (r(*)LR1).v+(r(*)LR2).v by A3,Def2
.= ((r(*)LR1)+(r(*)LR2)).v by RLVECT_2:def 10;
end;
hence thesis;
end;
end;
theorem
r * (s(*)L) = s (*) (r*L)
proof
per cases;
suppose A1: s=0;
hence r*(s(*)L) = r*ZeroLC(V) by Def2
.= r*(0*L) by RLVECT_2:43
.= (r*0)*L by RLVECT_2:47
.= ZeroLC(V) by RLVECT_2:43
.= s(*)(r*L) by A1,Def2;
end;
suppose A2: s<>0;
now let v;
thus(r*(s(*)L)).v = r*((s(*)L).v) by RLVECT_2:def 11
.= r*(L.(s"*v)) by A2,Def2
.= (r*L).(s"*v) by RLVECT_2:def 11
.= (s(*)(r*L)).v by A2,Def2;
end;
hence thesis;
end;
end;
theorem Th26:
r (*) ZeroLC(R) = ZeroLC R
proof
per cases;
suppose r=0;
hence thesis by Def2;
end;
suppose A1: r<>0;
now let v be Element of R;
thus(r(*)ZeroLC(R)).v = ZeroLC(R).(r"*v) by A1,Def2
.= 0 by RLVECT_2:20
.= ZeroLC(R).v by RLVECT_2:20;
end;
hence thesis;
end;
end;
theorem Th27:
r(*)(s(*)LR)=(r*s)(*)LR
proof
per cases;
suppose A1: r=0 or s=0;
then (r*s)(*)LR=ZeroLC(R) by Def2;
hence thesis by A1,Def2,Th26;
end;
suppose A2: r<>0 & s<>0;
now let v be Element of R;
thus(r(*)(s(*)LR)).v = (s(*)LR).(r"*v) by A2,Def2
.= LR.(s"*(r"*v)) by A2,Def2
.= LR.((s"*r")*v) by RLVECT_1:def 7
.= LR.((r*s)"*v) by XCMPLX_1:204
.= ((r*s)(*)LR).v by A2,Def2;
end;
hence thesis;
end;
end;
theorem Th28:
1 (*) LR = LR
proof
now let v be Element of R;
thus(1(*)LR).v = LR.(1"*v) by Def2
.= LR.v by RLVECT_1:def 8;
end;
hence thesis;
end;
begin :: The Sum of Coefficients of a Linear Combination
:: The sum of the coefficients of a linear combination, taken over its carrier.
definition
let S,LS;
func sum LS -> Real means :Def3:
ex F be FinSequence of S st
F is one-to-one & rng F = Carrier LS & it = Sum (LS*F);
existence
proof
consider p be FinSequence such that
A1: rng p=Carrier LS and
A2: p is one-to-one by FINSEQ_4:58;
reconsider p as FinSequence of S by A1,FINSEQ_1:def 4;
take Sum(LS*p);
thus thesis by A1,A2;
end;
uniqueness
proof
let S1,S2 be Real;
given F1 be FinSequence of S such that
A3: F1 is one-to-one and
A4: rng F1=Carrier LS and
A5: S1=Sum(LS*F1);
A6: dom(F1")=Carrier LS by A3,A4,FUNCT_1:33;
A7: len F1=card Carrier LS by A3,A4,FINSEQ_4:62;
given F2 be FinSequence of S such that
A8: F2 is one-to-one and
A9: rng F2=Carrier LS and
A10: S2=Sum(LS*F2);
set F12=(F1")*F2;
dom F2=Seg len F2 & len F2=card Carrier LS
by A8,A9,FINSEQ_1:def 3,FINSEQ_4:62;
then A11: dom F12=Seg len F1 by A6,A7,A9,RELAT_1:27;
dom F1=Seg len F1 by FINSEQ_1:def 3;
then rng(F1")=Seg len F1 by A3,FUNCT_1:33;
then A12: rng F12=Seg len F1 by A6,A9,RELAT_1:28;
then reconsider F12 as Function of Seg len F1,Seg len F1 by A11,FUNCT_2:1;
A13: F12 is onto by A12,FUNCT_2:def 3;
len(LS*F1)=len F1 by FINSEQ_2:33;
then dom(LS*F1)=Seg len F1 by FINSEQ_1:def 3;
then reconsider F12 as Permutation of dom(LS*F1) by A3,A8,A13;
LS*F1*F12 = LS*(F1*F12) by RELAT_1:36
.= LS*((F1*F1")*F2) by RELAT_1:36
.= LS*((id Carrier LS)*F2) by A3,A4,FUNCT_1:39
.= LS*F2 by A9,RELAT_1:53;
hence S1=S2 by A5,A10,FINSOP_1:7;
end;
end;
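Def3 adds the coefficients of LS over an enumeration of its carrier; the uniqueness proof shows the value does not depend on which one-to-one enumeration is chosen. In an informal dict model (carrier = keys, coefficients = values) this is just:

```python
# Sketch of Def3: sum L adds the coefficients over the carrier.  Since addition
# of reals is commutative, the enumeration order of the carrier is irrelevant,
# which is what the uniqueness argument via a permutation establishes.

def coeff_sum(L):
    """L maps carrier vectors to nonzero reals; return the sum of coefficients."""
    return sum(L.values())

L = {(1, 0): 2.0, (0, 1): -3.0, (2, 2): 0.5}
assert coeff_sum(L) == -0.5
```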
theorem Th29:
for F be FinSequence of S st Carrier LS misses rng F holds Sum (LS*F) = 0
proof
let F be FinSequence of S such that
A1: Carrier LS misses rng F;
set LF=LS*F;
set LF0=len LF|->(0 qua Real);
A2: now let k be Nat;
assume A3: 1<=k & k<=len LF;
A4: k in dom LF by A3,FINSEQ_3:25;
then k in dom F by FUNCT_1:11;
then F.k in rng F by FUNCT_1:def 3;
then A5: dom LS=the carrier of S & not F.k in Carrier LS
by A1,FUNCT_2:def 1,XBOOLE_0:3;
LF.k=LS.(F.k) & F.k in dom LS by A4,FUNCT_1:11,12;
hence LF.k=LF0.k by A5;
end;
len LF0=len LF by CARD_1:def 7;
then LF=LF0 by A2;
hence thesis by RVSUM_1:81;
end;
theorem Th30:
for F be FinSequence of S st F is one-to-one & Carrier LS c= rng F
holds sum LS = Sum (LS*F)
proof
let F be FinSequence of S such that
A1: F is one-to-one and
A2: Carrier LS c=rng F;
consider G be FinSequence of S such that
A3: G is one-to-one and
A4: rng G=Carrier LS and
A5: sum LS=Sum(LS*G) by Def3;
reconsider R=rng G as Subset of rng F by A2,A4;
reconsider FR=F-R,FR1=F-R` as FinSequence of S by FINSEQ_3:86;
consider p be Permutation of dom F such that
A6: F*p=(F-R`)^(F-R) by FINSEQ_3:115;
rng F\R` = R`` .= R;
then A7: rng FR1=R by FINSEQ_3:65;
FR1 is one-to-one by A1,FINSEQ_3:87;
then FR1,G are_fiberwise_equipotent by A3,A7,RFINSEQ:26;
then consider q be Permutation of dom G such that
A8: FR1=G*q by RFINSEQ:4;
len(LS*G)=len G by FINSEQ_2:33;
then A9: dom G=dom(LS*G) by FINSEQ_3:29;
(LS*G)*q=LS*FR1 by A8,RELAT_1:36;
then A10: sum LS=Sum(LS*FR1) by A5,A9,FINSOP_1:7;
len(LS*F)=len F by FINSEQ_2:33;
then A11: dom F=dom(LS*F) by FINSEQ_3:29;
rng F\R=rng FR by FINSEQ_3:65;
then rng FR misses Carrier LS by A4,XBOOLE_1:79;
then A12: LS*(FR1^FR)=(LS*FR1)^(LS*FR) & Sum(LS*FR)=0 by Th29,FINSEQOP:9;
(LS*F)*p=LS*(FR1^FR) by A6,RELAT_1:36;
hence Sum(LS*F) = Sum(LS*(FR1^FR)) by A11,FINSOP_1:7
.= sum LS+0 by A10,A12,RVSUM_1:75
.= sum LS;
end;
theorem Th31:
sum ZeroLC S = 0
proof
consider F be FinSequence of S such that
F is one-to-one and
A1: rng F=Carrier (ZeroLC S) and
A2: sum ZeroLC S = Sum((ZeroLC S)*F) by Def3;
F={} by A1,RLVECT_2:def 5;
hence thesis by A2,RVSUM_1:72;
end;
theorem Th32:
for v be Element of S st Carrier LS c= {v} holds sum LS = LS.v
proof
let v be Element of S;
consider p be FinSequence such that
A1: rng p={v} and
A2: p is one-to-one by FINSEQ_4:58;
reconsider p as FinSequence of S by A1,FINSEQ_1:def 4;
dom LS=the carrier of S & p=<*v*> by A1,A2,FINSEQ_3:97,FUNCT_2:def 1;
then A3: LS*p=<*LS.v*> by FINSEQ_2:34;
assume Carrier LS c={v};
hence sum LS = Sum(LS*p) by A1,A2,Th30
.= LS.v by A3,RVSUM_1:73;
end;
theorem
for v1,v2 be Element of S st
Carrier LS c= {v1,v2} & v1 <> v2 holds sum LS = LS.v1 + LS.v2
proof
let v1,v2 be Element of S;
consider p be FinSequence such that
A1: rng p={v1,v2} and
A2: p is one-to-one by FINSEQ_4:58;
reconsider p as FinSequence of S by A1,FINSEQ_1:def 4;
assume that
A3: Carrier LS c={v1,v2} and
A4: v1<>v2;
A5: dom LS=the carrier of S by FUNCT_2:def 1;
A6: Sum<*LS.v1*>=LS.v1 by RVSUM_1:73;
p=<*v1,v2*> or p=<*v2,v1*> by A1,A2,A4,FINSEQ_3:99;
then LS*p=<*LS.v1,LS.v2*> or LS*p=<*LS.v2,LS.v1*> by A5,FINSEQ_2:125;
then Sum(LS*p)=LS.v1+LS.v2 or Sum(LS*p)=LS.v2+LS.v1 by A6,RVSUM_1:74,76;
hence thesis by A1,A2,A3,Th30;
end;
theorem Th34:
sum (LS1+LS2) = sum LS1 + sum LS2
proof
set C1=Carrier LS1;
set C2=Carrier LS2;
consider p112 be FinSequence such that
A1: rng p112=C1\C2 and
A2: p112 is one-to-one by FINSEQ_4:58;
consider p12 be FinSequence such that
A3: rng p12=C1/\C2 and
A4: p12 is one-to-one by FINSEQ_4:58;
consider p211 be FinSequence such that
A5: rng p211=C2\C1 and
A6: p211 is one-to-one by FINSEQ_4:58;
reconsider p112,p12,p211 as FinSequence of S by A1,A3,A5,FINSEQ_1:def 4;
C1\C2 misses C1/\C2 by XBOOLE_1:89;
then A7: p112^p12 is one-to-one by A1,A2,A3,A4,FINSEQ_3:91;
set p1=p112^p12;
A8: rng p1 = C1\C2\/C1/\C2 by A1,A3,FINSEQ_1:31
.= C1 by XBOOLE_1:51;
then A9: rng(p112^p12^p211) = C1\/(C2\C1) by A5,FINSEQ_1:31
.= C1\/C2 by XBOOLE_1:39;
set p2=p12^p211;
A10: rng p2 = C1/\C2\/(C2\C1) by A3,A5,FINSEQ_1:31
.= C2 by XBOOLE_1:51;
set pp=p1^p211;
pp=p112^p2 by FINSEQ_1:32;
then A11: LS2*pp=(LS2*p112)^(LS2*p2) by FINSEQOP:9;
C2\C1 misses C1/\C2 by XBOOLE_1:89;
then A12: p12^p211 is one-to-one by A3,A4,A5,A6,FINSEQ_3:91;
C2 misses C1\C2 by XBOOLE_1:79;
then Sum(LS2*p112)=0 by A1,Th29;
then A13: Sum(LS2*pp) = 0+Sum(LS2*p2) by A11,RVSUM_1:75
.= sum LS2 by A10,A12,Def3;
len(LS1*pp)=len pp & len(LS2*pp)=len pp by FINSEQ_2:33;
then reconsider L1p=LS1*pp,L2p=LS2*pp as Element of len pp-tuples_on REAL
by FINSEQ_2:92;
A14: (LS1+LS2)*pp=L1p+L2p by Th13;
A15: C1 misses C2\C1 by XBOOLE_1:79;
then LS1*pp=(LS1*p1)^(LS1*p211) & Sum(LS1*p211)=0 by A5,Th29,FINSEQOP:9;
then A16: Sum(LS1*pp) = Sum(LS1*p1)+0 by RVSUM_1:75
.= sum LS1 by A7,A8,Def3;
A17: Carrier(LS1+LS2)c=C1\/C2
proof
let x be object;
assume x in Carrier(LS1+LS2);
then consider u be Element of S such that
A18: x=u and
A19: (LS1+LS2).u<>0;
(LS1+LS2).u=LS1.u+LS2.u by RLVECT_2:def 10;
then LS1.u<>0 or LS2.u<>0 by A19;
then x in C1 or x in C2 by A18;
hence thesis by XBOOLE_0:def 3;
end;
p112^p12^p211 is one-to-one by A5,A6,A7,A8,A15,FINSEQ_3:91;
hence sum(LS1+LS2) = Sum(L1p+L2p) by A9,A14,A17,Th30
.= sum LS1+sum LS2 by A13,A16,RVSUM_1:89;
end;
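Th34 states that `sum` is additive. The subtlety handled by the proof is overlapping carriers: where coefficients cancel, the vector drops out of Carrier(LS1+LS2), yet the identity still holds. A small Python sketch with illustrative helpers:

```python
# Sketch of Th34: sum (L1 + L2) = sum L1 + sum L2, even when the supports
# overlap and some coefficients cancel (such vectors leave both carriers).

def add_lc(L1, L2):
    """Pointwise sum of dict-based combinations, dropping zero coefficients."""
    out = {}
    for h in set(L1) | set(L2):
        c = L1.get(h, 0.0) + L2.get(h, 0.0)
        if c != 0.0:
            out[h] = c
    return out

def coeff_sum(L):
    return sum(L.values())

L1 = {(1, 0): 2.0, (0, 1): -3.0}
L2 = {(1, 0): -2.0, (2, 2): 1.5}    # the coefficient at (1,0) cancels
assert coeff_sum(add_lc(L1, L2)) == coeff_sum(L1) + coeff_sum(L2)
```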
theorem Th35:
sum (r * L) = r * sum L
proof
consider F be FinSequence of V such that
A1: F is one-to-one and
A2: rng F=Carrier L and
A3: sum L=Sum(L*F) by Def3;
L is Linear_Combination of Carrier L by RLVECT_2:def 6;
then r*L is Linear_Combination of Carrier L by RLVECT_2:44;
then A4: Carrier(r*L)c=rng F by A2,RLVECT_2:def 6;
thus r*sum L = Sum(r*(L*F)) by A3,RVSUM_1:87
.= Sum((r*L)*F) by Th14
.= sum(r*L) by A1,A4,Th30;
end;
theorem Th36:
sum (L1-L2) =sum L1 - sum L2
proof
thus sum(L1-L2) = sum(L1)+sum((-1)*L2) by Th34
.= sum(L1)+(-1)*sum(L2) by Th35
.= sum(L1)-sum(L2);
end;
theorem Th37:
sum LG = sum (g+LG)
proof
set gL=g+LG;
deffunc F(Element of G)=$1+g;
consider f be Function of the carrier of G,the carrier of G such that
A1: for h holds f.h=F(h) from FUNCT_2:sch 4;
consider F be FinSequence of G such that
A2: F is one-to-one and
A3: rng F=Carrier LG and
A4: sum LG=Sum(LG*F) by Def3;
A5: len(f*F)=len F by FINSEQ_2:33;
A6: len(LG*F)=len F by FINSEQ_2:33;
A7: len(gL*(f*F))=len(f*F) by FINSEQ_2:33;
A8: now let k be Nat;
assume A9: 1<=k & k<=len F;
then k in dom F by FINSEQ_3:25;
then A10: F/.k=F.k by PARTFUN1:def 6;
k in dom(LG*F) by A6,A9,FINSEQ_3:25;
then A11: (LG*F).k=LG.(F.k) by FUNCT_1:12;
k in dom(gL*(f*F)) by A5,A7,A9,FINSEQ_3:25;
then A12: (gL*(f*F)).k=gL.((f*F).k) by FUNCT_1:12;
k in dom(f*F) by A5,A9,FINSEQ_3:25;
then (f*F).k=f.(F.k) by FUNCT_1:12;
then (f*F).k=F/.k+g by A1,A10;
hence (gL*(f*F)).k = LG.(F/.k+g-g) by A12,Def1
.= LG.(F/.k+g+-g) by RLVECT_1:def 11
.= LG.(F/.k+(g+-g)) by RLVECT_1:def 3
.= LG.(F/.k+0.G) by RLVECT_1:def 10
.= (LG*F).k by A10,A11;
end;
now let x1,x2 be object such that
A13: x1 in dom(f*F) and
A14: x2 in dom(f*F) and
A15: (f*F).x1=(f*F).x2;
A16: f.(F/.x1)=F/.x1+g by A1;
A17: x1 in dom F by A13,FUNCT_1:11;
then A18: F.x1=F/.x1 by PARTFUN1:def 6;
A19: x2 in dom F by A14,FUNCT_1:11;
then A20: F.x2=F/.x2 by PARTFUN1:def 6;
(f*F).x1=f.(F.x1) & (f*F).x2=f.(F.x2) by A13,A14,FUNCT_1:12;
then F/.x1+g=F/.x2+g by A1,A15,A16,A18,A20;
then F/.x1=F/.x2 by RLVECT_1:8;
hence x1=x2 by A2,A17,A18,A19,A20,FUNCT_1:def 4;
end;
then A21: f*F is one-to-one by FUNCT_1:def 4;
Carrier gL c=rng(f*F)
proof
let x be object;
assume x in Carrier gL;
then x in g+Carrier LG by Th16;
then consider h such that
A22: x=g+h and
A23: h in Carrier LG;
consider y being object such that
A24: y in dom F and
A25: F.y=h by A3,A23,FUNCT_1:def 3;
A26: f.(F.y)=x by A1,A22,A25;
dom f=the carrier of G by FUNCT_2:def 1;
then A27: y in dom(f*F) by A24,A25,FUNCT_1:11;
then (f*F).y in rng(f*F) by FUNCT_1:def 3;
hence thesis by A26,A27,FUNCT_1:12;
end;
then sum gL=Sum(gL*(f*F)) by A21,Th30;
hence thesis by A4,A5,A6,A7,A8,FINSEQ_1:14;
end;
theorem Th38:
r <> 0 implies sum LR = sum (r(*)LR)
proof
set rL=r(*)LR;
deffunc F(Element of R)=r*$1;
consider f be Function of the carrier of R,the carrier of R such that
A1: for v be Element of R holds f.v=F(v) from FUNCT_2:sch 4;
consider F be FinSequence of R such that
A2: F is one-to-one and
A3: rng F=Carrier LR and
A4: sum LR=Sum(LR*F) by Def3;
assume A5: r<>0;
now let x1,x2 be object such that
A6: x1 in dom(f*F) and
A7: x2 in dom(f*F) and
A8: (f*F).x1=(f*F).x2;
A9: f.(F/.x1)=r*F/.x1 by A1;
A10: x1 in dom F by A6,FUNCT_1:11;
then A11: F.x1=F/.x1 by PARTFUN1:def 6;
A12: x2 in dom F by A7,FUNCT_1:11;
then A13: F.x2=F/.x2 by PARTFUN1:def 6;
(f*F).x1=f.(F.x1) & (f*F).x2=f.(F.x2) by A6,A7,FUNCT_1:12;
then A14: r*F/.x1=r*F/.x2 by A1,A8,A9,A11,A13;
F/.x1 = 1*F/.x1 by RLVECT_1:def 8
.= (r"*r)*F/.x1 by A5,XCMPLX_0:def 7
.= r"*(r*F/.x2) by A14,RLVECT_1:def 7
.= (r"*r)*F/.x2 by RLVECT_1:def 7
.= 1*F/.x2 by A5,XCMPLX_0:def 7
.= F/.x2 by RLVECT_1:def 8;
hence x1 = x2 by A2,A10,A11,A12,A13,FUNCT_1:def 4;
end;
then A15: f*F is one-to-one by FUNCT_1:def 4;
A16: len(LR*F)=len F by FINSEQ_2:33;
A17: len(f*F)=len F by FINSEQ_2:33;
A18: len(rL*(f*F))=len(f*F) by FINSEQ_2:33;
now let k be Nat;
assume A19: 1<=k & k<=len F;
then k in dom F by FINSEQ_3:25;
then A20: F/.k=F.k by PARTFUN1:def 6;
k in dom(LR*F) by A16,A19,FINSEQ_3:25;
then A21: (LR*F).k=LR.(F.k) by FUNCT_1:12;
k in dom(f*F) by A17,A19,FINSEQ_3:25;
then A22: (f*F).k=f.(F.k) by FUNCT_1:12;
k in dom(rL*(f*F)) by A17,A18,A19,FINSEQ_3:25;
then (rL*(f*F)).k=rL.((f*F).k) by FUNCT_1:12;
hence (rL*(f*F)).k = rL.(r*F/.k) by A1,A20,A22
.= LR.(r"*(r*(F/.k))) by A5,Def2
.= LR.((r"*r)*F/.k) by RLVECT_1:def 7
.= LR.(1*F/.k) by A5,XCMPLX_0:def 7
.= (LR*F).k by A20,A21,RLVECT_1:def 8;
end;
then A23: LR*F=rL*(f*F) by A16,A17,A18;
Carrier rL c=rng(f*F)
proof
let x be object;
assume x in Carrier rL;
then x in r*Carrier LR by A5,Th23;
then consider v be Element of R such that
A24: x=r*v and
A25: v in Carrier LR;
consider y be object such that
A26: y in dom F and
A27: F.y=v by A3,A25,FUNCT_1:def 3;
A28: f.v=x by A1,A24;
A29: dom F=dom(f*F) by A17,FINSEQ_3:29;
then (f*F).y=f.v by A26,A27,FUNCT_1:12;
hence thesis by A26,A28,A29,FUNCT_1:def 3;
end;
hence thesis by A4,A15,A23,Th30;
end;
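:: Th39: translating a linear combination L by a vector v shifts its sum
:: by (sum L)*v, since every term L.u*(v+u) contributes an extra L.u*v and
:: these copies of v add up to (sum L)*v.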
theorem Th39:
Sum (v + L) = (sum L)*v + Sum L
proof
defpred P[Nat] means
for L be Linear_Combination of V st card Carrier L<=$1 holds
Sum(v+L)=(sum L)*v+Sum L;
A1: for n be Nat st P[n] holds P[n+1]
proof
let n be Nat such that
A2: P[n];
let L be Linear_Combination of V such that
A3: card Carrier L<=n+1;
per cases by A3,NAT_1:8;
suppose card Carrier L<=n;
hence thesis by A2;
end;
suppose A4: card Carrier L=n+1;
then Carrier L is non empty;
then consider w be object such that
A5: w in Carrier L;
reconsider w as Element of V by A5;
A6: card(Carrier L\{w})=n by A4,A5,STIRL2_1:55;
consider Lw be Linear_Combination of{w} such that
A7: Lw.w=L.w by RLVECT_4:37;
set LLw=L-Lw;
LLw.w = L.w-Lw.w by RLVECT_2:54
.= 0 by A7;
then A8: not w in Carrier LLw by RLVECT_2:19;
A9: Carrier Lw c={w} by RLVECT_2:def 6;
then A10: Carrier LLw c=Carrier L\/Carrier Lw &
Carrier L\/Carrier Lw c= Carrier L\/{w} by RLVECT_2:55,XBOOLE_1:9;
Carrier L\/{w}=Carrier L by A5,ZFMISC_1:40;
then Carrier LLw c=Carrier L by A10;
then card Carrier LLw<=n by A8,A6,NAT_1:43,ZFMISC_1:34;
then A11: Sum(v+LLw)=(sum LLw)*v+Sum LLw by A2;
A12: LLw+Lw = L+(Lw-Lw) by RLVECT_2:40
.= L+ZeroLC(V) by RLVECT_2:57
.=L by RLVECT_2:41;
then A13: Sum L = Sum LLw+Sum Lw by RLVECT_3:1
.= Sum LLw+Lw.w*w by RLVECT_2:32;
v+Carrier Lw c= v+{w} by A9,RLTOPSP1:8;
then v+Carrier Lw c={v+w} by Lm3;
then Carrier(v+Lw)c={v+w} by Th16;
then v+Lw is Linear_Combination of{v+w} by RLVECT_2:def 6;
then A14: Sum(v+Lw) = (v+Lw).(v+w)*(v+w) by RLVECT_2:32
.= Lw.(v+w-v)*(v+w) by Def1
.= Lw.w*(v+w) by RLVECT_4:1;
A15: sum L = sum LLw+sum Lw by A12,Th34
.= sum LLw+Lw.w by A9,Th32;
v+L=(v+LLw)+(v+Lw) by A12,Th17;
hence Sum(v+L) = Sum(v+LLw)+Sum(v+Lw) by RLVECT_3:1
.= (sum LLw)*v+Sum LLw+(Lw.w*v+Lw.w*w)
by A11,A14,RLVECT_1:def 5
.= (sum LLw)*v+Sum LLw+Lw.w*v+Lw.w*w by RLVECT_1:def 3
.= (sum LLw)*v+Lw.w*v+Sum LLw+Lw.w*w by RLVECT_1:def 3
.= sum L*v+Sum LLw+Lw.w*w by A15,RLVECT_1:def 6
.= sum L*v+Sum L by A13,RLVECT_1:def 3;
end;
end;
A16: P[0 qua Nat]
proof
let L be Linear_Combination of V;
assume card Carrier L<=0;
then A17: Carrier L={}V;
then A18: L=ZeroLC(V) & Sum L=0.V by RLVECT_2:34,def 5;
v+Carrier L={} by A17,Th8;
then Carrier(v+L)={} by Th16;
hence Sum(v+L) = 0.V by RLVECT_2:34
.= 0.V+0.V
.= (sum L)*v+Sum L by A18,Th31,RLVECT_1:10;
end;
for n be Nat holds P[n] from NAT_1:sch 2(A16,A1);
then P[card Carrier L];
hence thesis;
end;
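:: Th40: scaling a linear combination by r scales its sum by r; as in
:: Th39, the proof is by induction on the size of the carrier.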
theorem Th40:
Sum (r(*)L) = r * Sum L
proof
defpred P[Nat] means
for L be Linear_Combination of V st card Carrier L<=$1 holds
Sum(r(*)L)=r*Sum L;
A1: for n be Nat st P[n] holds P[n+1]
proof
let n be Nat such that
A2: P[n];
let L be Linear_Combination of V such that
A3: card Carrier L<=n+1;
per cases;
suppose r=0;
then r(*)L=ZeroLC(V) & r*Sum L=0.V by Def2,RLVECT_1:10;
hence thesis by RLVECT_2:30;
end;
suppose A4: r<>0;
per cases by A3,NAT_1:8;
suppose card Carrier L<=n;
hence thesis by A2;
end;
suppose A5: card Carrier L=n+1;
then Carrier L<>{};
then consider p be object such that
A6: p in Carrier L by XBOOLE_0:def 1;
reconsider p as Element of V by A6;
A7: card(Carrier L\{p})=n by A5,A6,STIRL2_1:55;
consider Lp be Linear_Combination of{p} such that
A8: Lp.p=L.p by RLVECT_4:37;
set LLp=L-Lp;
LLp.p = L.p-Lp.p by RLVECT_2:54
.= 0 by A8;
then A9: not p in Carrier LLp by RLVECT_2:19;
A10: Carrier Lp c={p} by RLVECT_2:def 6;
then A11: Carrier LLp c=Carrier L\/Carrier Lp &
Carrier L\/Carrier Lp c= Carrier L\/{p} by RLVECT_2:55,XBOOLE_1:9;
r*Carrier Lp c={r*p}
proof
let x be object;
assume x in r*Carrier Lp;
then consider w be Element of V such that
A12: x=r*w and
A13: w in Carrier Lp;
w=p by A10,A13,TARSKI:def 1;
hence thesis by A12,TARSKI:def 1;
end;
then Carrier(r(*)Lp)c={r*p} by A4,Th23;
then r(*)Lp is Linear_Combination of{r*p} by RLVECT_2:def 6;
then A14: Sum(r(*)Lp)=(r(*)Lp).(r*p)*(r*p) by RLVECT_2:32
.=Lp.(r"*(r*p))*(r*p) by A4,Def2
.=Lp.((r"*r)*p)*(r*p) by RLVECT_1:def 7
.=Lp.(1*p)*(r*p) by A4,XCMPLX_0:def 7
.=Lp.p*(r*p) by RLVECT_1:def 8;
A15: LLp+Lp=L+(Lp-Lp) by RLVECT_2:40
.=L+ZeroLC(V) by RLVECT_2:57
.=L by RLVECT_2:41;
then A16: Sum L=Sum LLp+Sum Lp by RLVECT_3:1
.=Sum LLp+Lp.p*p by RLVECT_2:32;
Carrier L\/{p}=Carrier L by A6,ZFMISC_1:40;
then Carrier LLp c=Carrier L by A11;
then card Carrier LLp<=n by A9,A7,NAT_1:43,ZFMISC_1:34;
then A17: Sum(r(*)LLp)=r*Sum LLp by A2;
r(*)L=(r(*)LLp)+(r(*)Lp) by A15,Th24;
hence Sum(r(*)L)=Sum(r(*)LLp)+Sum(r(*)Lp) by RLVECT_3:1
.=r*Sum LLp+(Lp.p*r)*p by A14,A17,RLVECT_1:def 7
.=r*Sum LLp+r*(Lp.p*p) by RLVECT_1:def 7
.=r*Sum L by A16,RLVECT_1:def 5;
end;
end;
end;
A18: P[0 qua Nat]
proof
let L;
assume card Carrier L<=0;
then Carrier L={};
then A19: L=ZeroLC(V) by RLVECT_2:def 5;
then r*0.V=0.V & Sum L=0.V by RLVECT_2:30;
hence thesis by A19,Th26;
end;
for n be Nat holds P[n] from NAT_1:sch 2(A18,A1);
then P[card Carrier L];
hence thesis;
end;
begin :: Affine Independence of Vectors
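:: A subset A is affinely-independent when it is empty or, after translating
:: A by -v for some v in A and discarding 0.V, the resulting set is
:: linearly-independent; Th41 below shows the choice of v in A is irrelevant.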
definition
let V,A;
attr A is affinely-independent means
A is empty or ex v st v in A & -v + A\{0.V} is linearly-independent;
end;
registration
let V;
cluster empty -> affinely-independent for Subset of V;
coherence;
let v;
cluster {v} -> affinely-independent for Subset of V;
coherence
proof
{v} is affinely-independent
proof
assume{v} is non empty;
take v;
thus v in {v} by TARSKI:def 1;
-v+v=0.V by RLVECT_1:5;
then -v+{v}={0.V} by Lm3;
then -v+{v}\{0.V}={}(the carrier of V) by XBOOLE_1:37;
hence thesis by RLVECT_3:7;
end;
hence thesis;
end;
let w;
cluster {v,w} -> affinely-independent for Subset of V;
coherence
proof
per cases;
suppose v=w;
then {v,w}={w} by ENUMSET1:29;
hence thesis;
end;
suppose A1: v<>w;
-v+{v,w}c={-v+w,0.V}
proof
let x be object;
assume x in -v+{v,w};
then consider r be Element of V such that
A2: x=-v+r and
A3: r in {v,w};
per cases by A3,TARSKI:def 2;
suppose r=v;
then x=0.V by A2,RLVECT_1:5;
hence thesis by TARSKI:def 2;
end;
suppose r=w;
hence thesis by A2,TARSKI:def 2;
end;
end;
then A4: -v+{v,w}\{0.V}c={-v+w,0.V}\{0.V} by XBOOLE_1:33;
--v=v;
then A5: w+-v<>0.V by A1,RLVECT_1:6;
then A6: {-v+w} is linearly-independent by RLVECT_3:8;
{v,w} is affinely-independent
proof
assume{v,w} is non empty;
take v;
thus v in {v,w} by TARSKI:def 2;
0.V in {0.V} & not-v+w in {0.V} by A5,TARSKI:def 1;
then -v+{v,w}\{0.V}c={-v+w} by A4,ZFMISC_1:62;
hence thesis by A6,RLVECT_3:5;
end;
hence thesis;
end;
end;
end;
registration
let V;
cluster 1-element affinely-independent for Subset of V;
existence
proof
take {the Element of V};
thus thesis;
end;
end;
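:: Lm5: over an affinely-independent set, a linear combination whose vector
:: sum is 0.V and whose coefficient sum is 0 must have empty carrier.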
Lm5: for A st A is affinely-independent
for L be Linear_Combination of A st Sum L=0.V & sum L=0
holds Carrier L={}
proof
let A;
assume A1: A is affinely-independent;
let L be Linear_Combination of A such that
A2: Sum L=0.V and
A3: sum L=0;
per cases;
suppose A is empty;
then Carrier L c={} by RLVECT_2:def 6;
hence thesis;
end;
suppose A is non empty;
then consider p be VECTOR of V such that
A4: p in A and
A5: (-p+A)\{0.V} is linearly-independent by A1;
A6: A\/{p}=A by A4,ZFMISC_1:40;
consider Lp be Linear_Combination of{p} such that
A7: Lp.p=L.p by RLVECT_4:37;
set LLp=L-Lp;
(-p+LLp).(0.V)=LLp.(0.V-(-p)) by Def1
.=LLp.(-(-p)) by RLVECT_1:14
.=LLp.p
.=L.p-Lp.p by RLVECT_2:54
.=0 by A7;
then A8: not 0.V in Carrier(-p+LLp) by RLVECT_2:19;
A9: Carrier Lp c={p} by RLVECT_2:def 6;
then A10: Carrier Lp={p} or Carrier Lp={} by ZFMISC_1:33;
Carrier L c=A by RLVECT_2:def 6;
then Carrier LLp c=Carrier L\/Carrier Lp &
Carrier L\/Carrier Lp c=A\/{p} by A9,RLVECT_2:55,XBOOLE_1:13;
then A11: Carrier LLp c=A by A6;
Carrier(-p+LLp)=-p+Carrier LLp by Th16;
then Carrier(-p+LLp)c=-p+A by A11,RLTOPSP1:8;
then Carrier(-p+LLp)c=(-p+A)\{0.V} by A8,ZFMISC_1:34;
then A12: -p+LLp is Linear_Combination of(-p+A)\{0.V} by RLVECT_2:def 6;
A13: LLp+Lp=L+(Lp-Lp) by RLVECT_2:40
.=L+ZeroLC(V) by RLVECT_2:57
.=L by RLVECT_2:41;
A14: Sum(-p+Lp)=(sum Lp)*(-p)+Sum Lp by Th39
.=(sum Lp)*(-p)+Lp.p*p by RLVECT_2:32
.=Lp.p*(-p)+Lp.p*p by A9,Th32
.=Lp.p*(-p+p) by RLVECT_1:def 5
.=Lp.p*0.V by RLVECT_1:5
.=0.V;
Sum(-p+L)=(sum L)*(-p)+Sum L by Th39
.=0.V+0.V by A2,A3,RLVECT_1:10
.=0.V;
then 0.V=Sum((-p+LLp)+(-p+Lp)) by A13,Th17
.=Sum(-p+LLp)+0.V by A14,RLVECT_3:1
.=Sum(-p+LLp);
then {}=Carrier(-p+LLp) by A5,A12;
then A15: ZeroLC(V)=-p+LLp by RLVECT_2:def 5;
A16: LLp=0.V+LLp by Th21
.=(p+-p)+LLp by RLVECT_1:5
.=p+(-p+LLp) by Th19
.=ZeroLC(V) by A15,Th20;
then sum LLp=0 by Th31;
then 0=0+sum Lp by A3,A13,Th34
.=0+Lp.p by A9,Th32;
then not p in Carrier Lp by RLVECT_2:19;
then Carrier LLp\/Carrier Lp={} by A10,A16,RLVECT_2:def 5,TARSKI:def 1;
then Carrier L c={} by A13,RLVECT_2:37;
hence thesis;
end;
end;
Lm6: for A st
for L be Linear_Combination of A st Sum L=0.V & sum L=0 holds Carrier L={}
for p be VECTOR of V st p in A holds (-p+A)\{0.V} is linearly-independent
proof
let A such that
A1: for L be Linear_Combination of A st Sum L=0.V & sum L=0 holds
Carrier L={};
let p be Element of V such that
A2: p in A;
set pA=-p+A,pA0=(-p+A)\{0.V};
-p+p=0.V by RLVECT_1:5;
then 0.V in pA by A2;
then A3: pA0\/{0.V}=pA by ZFMISC_1:116;
let L be Linear_Combination of pA0 such that
A4: Sum L=0.V;
reconsider sL=-(sum L) as Real;
consider Lp be Linear_Combination of{0.V} such that
A5: Lp.0.V=sL by RLVECT_4:37;
set LLp=L+Lp;
set pLLp=p+LLp;
A6: Carrier Lp c={0.V} by RLVECT_2:def 6;
A7: p+pA=(p+-p)+A by Th5
.=0.V+A by RLVECT_1:5
.=A by Th6;
A8: Carrier pLLp=p+Carrier LLp by Th16;
A9: Carrier L c=pA0 by RLVECT_2:def 6;
then Carrier LLp c=Carrier L\/Carrier Lp & Carrier L\/Carrier Lp c=pA0\/{0.V}
by A6,RLVECT_2:37,XBOOLE_1:13;
then Carrier LLp c=pA by A3;
then Carrier pLLp c=A by A7,A8,RLTOPSP1:8;
then A10: pLLp is Linear_Combination of A by RLVECT_2:def 6;
A11: sum pLLp=sum LLp by Th37;
A12: sum LLp=sum L+sum Lp by Th34
.=sum L+sL by A5,A6,Th32
.=0;
then Sum pLLp=0*p+Sum LLp by Th39
.=0.V+Sum LLp by RLVECT_1:10
.=Sum LLp
.=Sum L+Sum Lp by RLVECT_3:1
.=Sum Lp by A4
.=Lp.0.V*0.V by RLVECT_2:32
.=0.V;
then Carrier pLLp={} by A1,A10,A11,A12;
then A13: pLLp=ZeroLC(V) by RLVECT_2:def 5;
A14: LLp=0.V+LLp by Th21
.=(-p+p)+LLp by RLVECT_1:5
.=-p+ZeroLC(V) by A13,Th19
.=ZeroLC(V) by Th20;
assume Carrier L<>{};
then consider v be object such that
A15: v in Carrier L by XBOOLE_0:def 1;
reconsider v as Element of V by A15;
not v in Carrier Lp by A6,A9,A15,XBOOLE_0:def 5;
then Lp.v=0;
then L.v+0=LLp.v by RLVECT_2:def 10
.=0 by A14,RLVECT_2:20;
hence contradiction by A15,RLVECT_2:19;
end;
theorem Th41:
A is affinely-independent iff for v st v in A holds
-v + A\{0.V} is linearly-independent
proof
hereby assume A is affinely-independent;
then for L be Linear_Combination of A st Sum L=0.V & sum L=0
holds Carrier L={} by Lm5;
hence for v st v in A holds(-v+A)\{0.V} is linearly-independent by Lm6;
end;
assume A1: for v st v in A holds(-v+A)\{0.V} is linearly-independent;
assume A is non empty;
then consider v be object such that
A2: v in A;
reconsider v as Element of V by A2;
take v;
thus thesis by A1,A2;
end;
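:: Th42 restates affine independence without choosing a base point: only
:: the trivial combination has both Sum L = 0.V and sum L = 0.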
theorem Th42:
A is affinely-independent iff
for L be Linear_Combination of A st Sum L = 0.V & sum L = 0
holds Carrier L = {}
proof
thus A is affinely-independent implies for L be Linear_Combination of A st
Sum L=0.V & sum L=0 holds Carrier L={} by Lm5;
assume for L be Linear_Combination of A st Sum L=0.V & sum L=0
holds Carrier L={};
then for p be VECTOR of V st p in A holds(-p+A)\{0.V} is linearly-independent
by Lm6;
hence thesis by Th41;
end;
theorem Th43:
A is affinely-independent & B c= A implies B is affinely-independent
proof
assume that
A1: A is affinely-independent and
A2: B c=A;
now let L be Linear_Combination of B such that
A3: Sum L=0.V & sum L=0;
L is Linear_Combination of A by A2,RLVECT_2:21;
hence Carrier L={} by A1,A3,Th42;
end;
hence thesis by Th42;
end;
registration
let V;
cluster linearly-independent -> affinely-independent for Subset of V;
coherence
proof
let A;
assume A is linearly-independent;
then for L be Linear_Combination of A st Sum L=0.V & sum L=0 holds
Carrier L={};
hence thesis by Th42;
end;
end;
reserve I for affinely-independent Subset of V;
registration
let V,I,v;
cluster v + I -> affinely-independent;
coherence
proof
set vI=v+I;
now let L be Linear_Combination of vI such that
A1: Sum L=0.V and
A2: sum L=0;
set vL=-v+L;
A3: sum vL=0 by A2,Th37;
A4: Carrier vL=-v+Carrier L & Carrier L c=vI by Th16,RLVECT_2:def 6;
-v+vI=(-v+v)+I by Th5
.=0.V+I by RLVECT_1:5
.=I by Th6;
then Carrier vL c=I by A4,RLTOPSP1:8;
then A5: vL is Linear_Combination of I by RLVECT_2:def 6;
Sum vL=0*(-v)+0.V by A1,A2,Th39
.=0.V+0.V by RLVECT_1:10
.=0.V;
then Carrier vL={} by A3,A5,Th42;
then A6: vL=ZeroLC(V) by RLVECT_2:def 5;
L=0.V+L by Th21
.=(v+-v)+L by RLVECT_1:5
.=v+ZeroLC(V) by A6,Th19
.=ZeroLC(V) by Th20;
hence Carrier L={} by RLVECT_2:def 5;
end;
hence thesis by Th42;
end;
end;
theorem
v+A is affinely-independent implies A is affinely-independent
proof
assume A1: v+A is affinely-independent;
-v+(v+A)=(-v+v)+A by Th5
.=0.V+A by RLVECT_1:5
.=A by Th6;
hence thesis by A1;
end;
registration
let V,I,r;
cluster r*I -> affinely-independent;
coherence
proof
per cases;
suppose r=0;
then r*I c={0.V} by Th12;
then r*I={}V or r*I={0.V} by ZFMISC_1:33;
hence thesis;
end;
suppose A1: r<>0;
now let L be Linear_Combination of r*I such that
A2: Sum L=0.V and
A3: sum L=0;
set rL=r"(*)L;
A4: Sum rL=r"*0.V by A2,Th40
.=0.V;
A5: Carrier rL=r"*Carrier L & Carrier L c=r*I by A1,Th23,RLVECT_2:def 6;
r"*(r*I)=(r"*r)*I by Th10
.=1*I by A1,XCMPLX_0:def 7
.=I by Th11;
then Carrier rL c=I by A5,Th9;
then A6: rL is Linear_Combination of I by RLVECT_2:def 6;
sum rL=0 by A1,A3,Th38;
then Carrier rL={} by A4,A6,Th42;
then A7: rL=ZeroLC(V) by RLVECT_2:def 5;
L=1(*)L by Th28
.=(r*r")(*)L by A1,XCMPLX_0:def 7
.=r(*)ZeroLC(V) by A7,Th27
.=ZeroLC(V) by Th26;
hence Carrier L={} by RLVECT_2:def 5;
end;
hence thesis by Th42;
end;
end;
end;
theorem
r * A is affinely-independent & r <> 0 implies A is affinely-independent
proof
assume that
A1: r*A is affinely-independent and
A2: r<>0;
r"*(r*A)=(r"*r)*A by Th10
.=1*A by A2,XCMPLX_0:def 7
.=A by Th11;
hence thesis by A1;
end;
theorem Th46:
0.V in A implies
(A is affinely-independent iff A \ {0.V} is linearly-independent)
proof
assume A1: 0.V in A;
A2: -0.V+A=0.V+A
.=A by Th6;
hence A is affinely-independent implies A\{0.V} is linearly-independent by A1
,Th41;
assume A\{0.V} is linearly-independent;
hence thesis by A1,A2;
end;
definition
let V;
let F be Subset-Family of V;
attr F is affinely-independent means
A in F implies A is affinely-independent;
end;
registration
let V;
cluster empty -> affinely-independent for Subset-Family of V;
coherence;
let I;
cluster {I} -> affinely-independent for Subset-Family of V;
coherence
by TARSKI:def 1;
end;
registration
let V;
cluster empty affinely-independent for Subset-Family of V;
existence
proof
take{}bool the carrier of V;
thus thesis;
end;
cluster non empty affinely-independent for Subset-Family of V;
existence
proof
take{the affinely-independent Subset of V};
thus thesis;
end;
end;
theorem
F1 is affinely-independent & F2 is affinely-independent implies
F1 \/ F2 is affinely-independent
proof
assume A1: F1 is affinely-independent & F2 is affinely-independent;
let A;
assume A in F1\/F2;
then A in F1 or A in F2 by XBOOLE_0:def 3;
hence thesis by A1;
end;
theorem
F1 c= F2 & F2 is affinely-independent implies F1 is affinely-independent;
begin :: Affine Hull
definition
let RLS;
let A be Subset of RLS;
func Affin A -> Subset of RLS equals
meet {B where B is Affine Subset of RLS : A c= B};
coherence
proof
set BB={B where B is Affine Subset of RLS:A c=B};
BB c=bool[#]RLS
proof
let x be object;
assume x in BB;
then ex B be Affine Subset of RLS st x=B & A c=B;
hence thesis;
end;
then reconsider BB as Subset-Family of RLS;
meet BB is Subset of RLS;
hence thesis;
end;
end;
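:: Affin A is the affine hull of A: the intersection of all Affine subsets
:: containing A, and hence (by Lm7 and Th51) the smallest such subset.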
registration
let RLS;
let A be Subset of RLS;
cluster Affin A -> Affine;
coherence
proof
set BB={B where B is Affine Subset of RLS:A c=B};
let v1,v2 be Element of RLS;
let r be Real;
assume that
A1: v1 in Affin A and
A2: v2 in Affin A;
A3: now let Y be set;
assume A4: Y in BB;
then consider B be Affine Subset of RLS such that
A5: Y=B and
A c=B;
v1 in B & v2 in B by A1,A2,A4,A5,SETFAM_1:def 1;
hence (1-r)*v1+r*v2 in Y by A5,RUSUB_4:def 4;
end;
BB<>{} by A1,SETFAM_1:def 1;
hence thesis by A3,SETFAM_1:def 1;
end;
end;
Lm7: for V be non empty RLSStruct for A be Subset of V holds A c= Affin A
proof
let V be non empty RLSStruct;
let A be Subset of V;
set BB={B where B is Affine Subset of V:A c=B};
A1: now let Y be set;
assume Y in BB;
then ex B be Affine Subset of V st Y=B & A c=B;
hence A c=Y;
end;
[#]V is Affine;
then [#]V in BB;
hence thesis by A1,SETFAM_1:5;
end;
Lm8: for V be non empty RLSStruct for A be Affine Subset of V holds A=Affin A
proof
let V be non empty RLSStruct;
let A be Affine Subset of V;
set BB={B where B is Affine Subset of V:A c=B};
A in BB;
then A1: Affin A c=A by SETFAM_1:3;
A c=Affin A by Lm7;
hence thesis by A1;
end;
registration
let RLS;
let A be empty Subset of RLS;
cluster Affin A -> empty;
coherence
proof
{}RLS is Affine;
hence thesis by Lm8;
end;
end;
registration
let RLS;
let A be non empty Subset of RLS;
cluster Affin A -> non empty;
coherence
proof
A c=Affin A by Lm7;
hence thesis;
end;
end;
theorem
for A be Subset of RLS holds A c= Affin A by Lm7;
theorem
for A be Affine Subset of RLS holds A = Affin A by Lm8;
theorem Th51:
for A,B be Subset of RLS st A c= B & B is Affine holds Affin A c= B
proof
let A,B be Subset of RLS;
assume A c=B & B is Affine;
then B in {C where C is Affine Subset of RLS:A c=C};
hence thesis by SETFAM_1:3;
end;
theorem Th52:
for A,B be Subset of RLS st A c= B holds Affin A c= Affin B
proof
let A,B be Subset of RLS;
assume A1: A c=B;
B c=Affin B by Lm7;
then A c=Affin B by A1;
hence thesis by Th51;
end;
theorem Th53:
Affin (v+A) = v + Affin A
proof
v+A c=v+Affin A by Lm7,RLTOPSP1:8;
then A1: Affin(v+A)c=v+Affin A by Th51,RUSUB_4:31;
-v+(v+A)=(-v+v)+A by Th5
.=0.V+A by RLVECT_1:5
.=A by Th6;
then A c=-v+Affin(v+A) by Lm7,RLTOPSP1:8;
then A2: Affin A c=-v+Affin(v+A) by Th51,RUSUB_4:31;
v+(-v+Affin(v+A))=(v+-v)+Affin(v+A) by Th5
.=0.V+Affin(v+A) by RLVECT_1:5
.=Affin(v+A) by Th6;
then v+Affin A c=Affin(v+A) by A2,RLTOPSP1:8;
hence thesis by A1;
end;
theorem Th54:
AR is Affine implies r * AR is Affine
proof
assume A1: AR is Affine;
let v1,v2 be VECTOR of R,s;
assume v1 in r*AR;
then consider w1 be Element of R such that
A2: v1=r*w1 and
A3: w1 in AR;
assume v2 in r*AR;
then consider w2 be Element of R such that
A4: v2=r*w2 and
A5: w2 in AR;
A6: (1-s)*w1+s*w2 in AR by A1,A3,A5;
A7: (1-s)*(r*w1)=((1-s)*r)*w1 by RLVECT_1:def 7
.=r*((1-s)*w1) by RLVECT_1:def 7;
s*(r*w2)=(s*r)*w2 by RLVECT_1:def 7
.=r*(s*w2) by RLVECT_1:def 7;
then (1-s)*v1+s*v2=r*((1-s)*w1+s*w2) by A2,A4,A7,RLVECT_1:def 5;
hence thesis by A6;
end;
theorem Th55:
r <> 0 implies Affin (r*AR) = r * Affin AR
proof
assume A1: r<>0;
r"*(r*AR)=(r"*r)*AR by Th10
.=1*AR by A1,XCMPLX_0:def 7
.=AR by Th11;
then AR c=r"*Affin(r*AR) by Lm7,Th9;
then A2: Affin AR c=r"*Affin(r*AR) by Th51,Th54;
r*AR c=r*Affin AR by Lm7,Th9;
then A3: Affin(r*AR)c=r*Affin AR by Th51,Th54;
r*(r"*Affin(r*AR))=(r*r")*Affin(r*AR) by Th10
.=1*Affin(r*AR) by A1,XCMPLX_0:def 7
.=Affin(r*AR) by Th11;
then r*Affin AR c=Affin(r*AR) by A2,Th9;
hence thesis by A3;
end;
theorem
Affin (r*A) = r * Affin A
proof
per cases;
suppose A1: r=0;
then A2: r*Affin A c={0.V} by Th12;
A3: r*Affin A c=r*A
proof
let x be object;
assume A4: x in r*Affin A;
then ex v be Element of V st x=r*v & v in Affin A;
then A is non empty;
then consider v be object such that
A5: v in A;
reconsider v as Element of V by A5;
A6: r*v in r*A by A5;
x=0.V by A2,A4,TARSKI:def 1;
hence thesis by A1,A6,RLVECT_1:10;
end;
r*A c={0.V} by A1,Th12;
then A7: r*A is empty or r*A={0.V} by ZFMISC_1:33;
{0.V} is Affine by RUSUB_4:23;
then A8: Affin(r*A)=r*A by A7,Lm8;
r*A c=r*Affin A by Lm7,Th9;
hence thesis by A3,A8;
end;
suppose r<>0;
hence thesis by Th55;
end;
end;
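:: Th57: around any point v of Affin A, the affine hull is the translate
:: by v of the linear span of -v + A.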
theorem Th57:
v in Affin A implies Affin A = v + Up Lin (-v+A)
proof
set vA=-v+A;
set BB={B where B is Affine Subset of V:vA c=B};
A1: -v+A c=Up(Lin(-v+A))
proof
let x be object;
assume x in -v+A;
then x in Lin(-v+A) by RLVECT_3:15;
hence thesis;
end;
Up(Lin vA) is Affine by RUSUB_4:24;
then A2: Up(Lin vA) in BB by A1;
then A3: Affin vA c=Up(Lin vA) by SETFAM_1:3;
assume v in Affin A;
then -v+v in -v+Affin A;
then A4: 0.V in -v+Affin A by RLVECT_1:5;
now let Y be set;
A5: Affin vA=-v+Affin A by Th53;
assume Y in BB;
then consider B be Affine Subset of V such that
A6: Y=B and
A7: vA c=B;
A8: Lin vA is Subspace of Lin B by A7,RLVECT_3:20;
Affin vA c=B by A7,Th51;
then B=the carrier of Lin B by A4,A5,RUSUB_4:26;
hence Up(Lin vA)c=Y by A6,A8,RLSUB_1:def 2;
end;
then Up(Lin(-v+A))c=Affin vA by A2,SETFAM_1:5;
then Up(Lin(-v+A))=Affin vA by A3;
hence v+Up(Lin(-v+A))=v+(-v+Affin A) by Th53
.=(v+-v)+Affin A by Th5
.=0.V+Affin A by RLVECT_1:5
.=Affin A by Th6;
end;
Lm9: Lin(A\/{0.V})=Lin A
proof
{0.V}=the carrier of(0).V by RLSUB_1:def 3;
then Lin{0.V}=(0).V by RLVECT_3:18;
hence Lin(A\/{0.V})=(Lin A)+(0).V by RLVECT_3:22
.=Lin A by RLSUB_2:9;
end;
theorem Th58:
A is affinely-independent iff for B st B c= A & Affin A = Affin B holds A = B
proof
hereby assume A1: A is affinely-independent;
let B;
assume that
A2: B c=A and
A3: Affin A=Affin B;
assume A<>B;
then B c<A by A2;
then consider p be object such that
A4: p in A and
A5: not p in B by XBOOLE_0:6;
reconsider p as Element of V by A4;
A6: A\{p}c=Affin(A\{p}) by Lm7;
B is non empty by A3,A4;
then consider q be object such that
A7: q in B;
reconsider q as Element of V by A7;
-(-q)=q;
then A8: -q+p<>0.V by A5,A7,RLVECT_1:def 10;
set qA0=(-q+A)\{0.V};
-q+p in -q+A by A4;
then A9: -q+p in qA0 by A8,ZFMISC_1:56;
qA0 is linearly-independent by A1,A2,A7,Th41;
then A10: not-q+p in Lin(qA0\{-q+p}) by A9,RLVECT_5:17;
A11: q+(-q+p)=q+-q+p by RLVECT_1:def 3
.=0.V+p by RLVECT_1:5
.=p;
-q+q=0.V by RLVECT_1:5;
then 0.V in -q+A by A2,A7;
then A12: 0.V in -q+A\{-q+p} by A8,ZFMISC_1:56;
(-q+A)\{0.V}\{-q+p}=(-q+A)\({0.V}\/{-q+p}) by XBOOLE_1:41
.=(-q+A)\{-q+p}\{0.V} by XBOOLE_1:41;
then A13: Lin(qA0\{-q+p})=Lin(((-q+A)\{-q+p}\{0.V})\/{0.V}) by Lm9
.=Lin((-q+A)\{-q+p}) by A12,ZFMISC_1:116
.=Lin((-q+A)\(-q+{p})) by Lm3
.=Lin(-q+(A\{p})) by Lm2;
q in A\{p} by A2,A5,A7,ZFMISC_1:56;
then A14: Affin(A\{p})=q+Up Lin(qA0\{-q+p}) by A6,A13,Th57;
A15: not p in Affin(A\{p})
proof
assume p in Affin(A\{p});
then consider v be Element of V such that
A16: p=q+v and
A17: v in Up Lin(qA0\{-q+p}) by A14;
-q+p=v by A11,A16,RLVECT_1:8;
hence thesis by A10,A17;
end;
B c=A\{p} by A2,A5,ZFMISC_1:34;
then A18: Affin B c=Affin(A\{p}) by Th52;
Affin(A\{p})c=Affin A by Th52,XBOOLE_1:36;
then A19: Affin A=Affin(A\{p}) by A3,A18;
A c=Affin A by Lm7;
hence contradiction by A4,A15,A19;
end;
assume A20: for B st B c=A & Affin A=Affin B holds A=B;
assume A is non affinely-independent;
then consider p be Element of V such that
A21: p in A and
A22: (-p+A)\{0.V} is non linearly-independent by Th41;
set L=Lin((-p+A)\{0.V});
(-p+A)\{0.V}c=the carrier of L
proof
let x be object;
assume x in (-p+A)\{0.V};
then x in L by RLVECT_3:15;
hence thesis;
end;
then reconsider pA0=(-p+A)\{0.V} as Subset of L;
-p+p=0.V by RLVECT_1:5;
then A23: 0.V in -p+A by A21;
then A24: pA0\/{0.V}=-p+A by ZFMISC_1:116;
Lin(pA0)=L by RLVECT_5:20;
then consider b be Subset of L such that
A25: b c=pA0 and
A26: b is linearly-independent and
A27: Lin(b)=L by RLVECT_3:25;
reconsider B=b as linearly-independent Subset of V by A26,RLVECT_5:14;
A28: B\/{0.V}c=pA0\/{0.V} by A25,XBOOLE_1:9;
0.V in {0.V} by TARSKI:def 1;
then p+0.V=p & 0.V in B\/{0.V} by XBOOLE_0:def 3;
then A29: p in p+(B\/{0.V});
A30: p+(B\/{0.V})c=Affin(p+(B\/{0.V})) by Lm7;
A31: p+(-p+A)=(p+-p)+A by Th5
.=0.V+A by RLVECT_1:5
.=A by Th6;
A32: -p+(p+(B\/{0.V}))=(-p+p)+(B\/{0.V}) by Th5
.=0.V+(B\/{0.V}) by RLVECT_1:5
.=B\/{0.V} by Th6;
A c=Affin A by Lm7;
then Affin A=p+Up Lin(-p+A) by A21,Th57
.=p+Up Lin((-p+A)\{0.V}\/{0.V}) by A23,ZFMISC_1:116
.=p+Up L by Lm9
.=p+Up Lin(B) by A27,RLVECT_5:20
.=p+Up Lin(-p+(p+(B\/{0.V}))) by A32,Lm9
.=Affin(p+(B\/{0.V})) by A29,A30,Th57;
then pA0=(B\/{0.V})\{0.V} by A20,A24,A28,A31,A32,RLTOPSP1:8
.=B by RLVECT_3:6,ZFMISC_1:117;
hence contradiction by A22;
end;
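:: Th59: the affine hull consists exactly of the affine combinations of A,
:: i.e. sums of linear combinations whose coefficients add up to 1.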
theorem Th59:
Affin A = {Sum L where L is Linear_Combination of A : sum L=1}
proof
set S={Sum L where L is Linear_Combination of A:sum L=1};
per cases;
suppose A1: A is empty;
assume Affin A<>S;
then S is non empty by A1;
then consider x being object such that
A2: x in S;
consider L be Linear_Combination of A such that
x=Sum L and
A3: sum L=1 by A2;
A={}(the carrier of V) by A1;
then L=ZeroLC(V) by RLVECT_2:23;
hence thesis by A3,Th31;
end;
suppose A is non empty;
then consider p be object such that
A4: p in A;
reconsider p as Element of V by A4;
A c=Affin A by Lm7;
then A5: Affin A=p+Up Lin(-p+A) by A4,Th57;
thus Affin A c=S
proof
let x be object;
assume x in Affin A;
then consider v such that
A6: x=p+v and
A7: v in Up Lin(-p+A) by A5;
v in Lin(-p+A) by A7;
then consider L be Linear_Combination of-p+A such that
A8: Sum L=v by RLVECT_3:14;
set pL=p+L;
consider Lp be Linear_Combination of{0.V} such that
A9: Lp.0.V=1-sum L by RLVECT_4:37;
set pLL=p+(L+Lp);
set pLp=p+Lp;
A10: Carrier Lp c={0.V} by RLVECT_2:def 6;
then A11: p+Carrier Lp c=p+{0.V} by RLTOPSP1:8;
A12: Carrier pL=p+Carrier L & Carrier L c=-p+A by Th16,RLVECT_2:def 6;
p+(-p+A)=(p+-p)+A by Th5
.=0.V+A by RLVECT_1:5
.=A by Th6;
then A13: Carrier pL c=A by A12,RLTOPSP1:8;
A14: Carrier(pL+pLp)c=Carrier pL\/Carrier pLp & pLL=pL+pLp by Th17,
RLVECT_2:37;
Carrier pLp=p+Carrier Lp & p+{0.V}={p+0.V} by Lm3,Th16;
then Carrier pLp c={p} by A11;
then Carrier pL\/Carrier pLp c=A\/{p} by A13,XBOOLE_1:13;
then Carrier pLL c=A\/{p} by A14;
then Carrier pLL c=A by A4,ZFMISC_1:40;
then A15: pLL is Linear_Combination of A by RLVECT_2:def 6;
A16: sum pLL=sum(L+Lp) by Th37;
A17: sum(L+Lp)=sum L+sum Lp by Th34
.=sum L+(1-sum L) by A9,A10,Th32
.=1;
then Sum pLL=1*p+Sum(L+Lp) by Th39
.=1*p+(v+Sum Lp) by A8,RLVECT_3:1
.=1*p+(v+Lp.0.V*0.V) by RLVECT_2:32
.=1*p+(v+0.V)
.=p+(v+0.V) by RLVECT_1:def 8
.=x by A6;
hence thesis by A15,A16,A17;
end;
let x be object;
assume x in S;
then consider L be Linear_Combination of A such that
A18: Sum L=x and
A19: sum L=1;
set pL=-p+L;
Carrier L c=A by RLVECT_2:def 6;
then A20: -p+Carrier L c=-p+A by RLTOPSP1:8;
-p+Carrier L=Carrier pL by Th16;
then pL is Linear_Combination of-p+A by A20,RLVECT_2:def 6;
then Sum pL in Lin(-p+A) by RLVECT_3:14;
then A21: Sum pL in Up Lin(-p+A);
p+Sum pL=p+(1*(-p)+Sum L) by A19,Th39
.=p+(-p+Sum L) by RLVECT_1:def 8
.=(p+-p)+Sum L by RLVECT_1:def 3
.=0.V+Sum L by RLVECT_1:5
.=x by A18;
hence thesis by A5,A21;
end;
end;
theorem Th60:
I c=A implies ex Ia be affinely-independent Subset of V st
I c= Ia & Ia c= A & Affin Ia = Affin A
proof
assume A1: I c=A;
A2: A c=Affin A by Lm7;
per cases;
suppose A3: I is empty;
per cases;
suppose A4: A is empty;
take I;
thus thesis by A3,A4;
end;
suppose A is non empty;
then consider p be object such that
A5: p in A;
reconsider p as Element of V by A5;
set L=Lin(-p+A);
-p+A c=[#]L
proof
let x be object;
assume x in -p+A;
then x in Lin(-p+A) by RLVECT_3:15;
hence thesis;
end;
then reconsider pA=-p+A as Subset of L;
L=Lin(pA) by RLVECT_5:20;
then consider Ia be Subset of L such that
A6: Ia c=pA and
A7: Ia is linearly-independent and
A8: Lin(Ia)=L by RLVECT_3:25;
[#]L c=[#]V by RLSUB_1:def 2;
then reconsider IA=Ia as Subset of V by XBOOLE_1:1;
set IA0=IA\/{0.V};
not 0.V in IA by A7,RLVECT_3:6,RLVECT_5:14;
then A9: IA0\{0.V}=IA by ZFMISC_1:117;
0.V in {0.V} by TARSKI:def 1;
then A10: 0.V in IA0 by XBOOLE_0:def 3;
IA is linearly-independent by A7,RLVECT_5:14;
then IA0 is affinely-independent by A9,A10,Th46;
then reconsider pIA0=p+IA0 as affinely-independent Subset of V;
take pIA0;
thus I c=pIA0 by A3;
thus pIA0 c=A
proof
let x be object;
assume x in pIA0;
then consider v such that
A11: x=p+v and
A12: v in IA0;
per cases by A12,XBOOLE_0:def 3;
suppose v in IA;
then v in {-p+w:w in A} by A6;
then consider w such that
A13: v=-p+w and
A14: w in A;
x=(p+-p)+w by A11,A13,RLVECT_1:def 3
.=0.V+w by RLVECT_1:5
.=w;
hence thesis by A14;
end;
suppose v in {0.V};
then v=0.V by TARSKI:def 1;
hence thesis by A5,A11;
end;
end;
A15: pIA0 c=Affin pIA0 by Lm7;
A16: -p+pIA0=(-p+p)+IA0 by Th5
.=0.V+IA0 by RLVECT_1:5
.=IA0 by Th6;
p+0.V=p;
then p in pIA0 by A10;
hence Affin pIA0=p+Up Lin(IA0) by A15,A16,Th57
.=p+Up Lin(IA) by Lm9
.=p+Up L by A8,RLVECT_5:20
.=Affin A by A2,A5,Th57;
end;
end;
suppose I is non empty;
then consider p be object such that
A17: p in I;
reconsider p as Element of V by A17;
A18: (-p+I)\{0.V} is linearly-independent by A17,Th41;
A19: -p+I c=-p+A by A1,RLTOPSP1:8;
set L=Lin(-p+A);
A20: -p+I\{0.V}c=-p+I by XBOOLE_1:36;
A21: -p+A c=Up L
proof
let x be object;
assume x in -p+A;
then x in L by RLVECT_3:15;
hence thesis;
end;
then -p+I c=Up L by A19;
then reconsider pI0=-p+I\{0.V},pA=-p+A as Subset of L by A20,A21,XBOOLE_1:1;
L=Lin(pA) & pI0 c=pA by A19,A20,RLVECT_5:20;
then consider i be linearly-independent Subset of L such that
A22: pI0 c=i and
A23: i c=pA and
A24: Lin(i)=L by A18,Th15,RLVECT_5:15;
reconsider Ia=i as linearly-independent Subset of V by RLVECT_5:14;
set I0=Ia\/{0.V};
A25: 0.V in {0.V} by TARSKI:def 1;
not 0.V in Ia by RLVECT_3:6;
then I0\{0.V}=Ia & 0.V in I0 by A25,XBOOLE_0:def 3,ZFMISC_1:117;
then I0 is affinely-independent by Th46;
then reconsider pI0=p+I0 as affinely-independent Subset of V;
take pI0;
A26: -p+p=0.V by RLVECT_1:5;
then A27: p+(-p+I)=0.V+I by Th5
.=I by Th6;
0.V in {-p+v where v is Element of V:v in I} by A17,A26;
then (-p+I\{0.V})\/{0.V}=-p+I by ZFMISC_1:116;
then A28: -p+I c=I0 by A22,XBOOLE_1:9;
hence I c=pI0 by A27,RLTOPSP1:8;
p+(-p+I)c=pI0 by A28,RLTOPSP1:8;
then A29: p in pI0 by A17,A27;
thus pI0 c=A
proof
let x be object;
assume x in pI0;
then consider v such that
A30: x=p+v and
A31: v in I0;
per cases by A31,XBOOLE_0:def 3;
suppose v in Ia;
then v in {-p+w:w in A} by A23;
then consider w such that
A32: v=-p+w and
A33: w in A;
x=(p+-p)+w by A30,A32,RLVECT_1:def 3
.=0.V+w by RLVECT_1:5
.=w;
hence thesis by A33;
end;
suppose v in {0.V};
then v=0.V by TARSKI:def 1;
then x=p by A30;
hence thesis by A1,A17;
end;
end;
A34: pI0 c=Affin pI0 by Lm7;
A35: p in A by A1,A17;
-p+pI0=0.V+I0 by A26,Th5
.=I0 by Th6;
hence Affin pI0=p+Up Lin(I0) by A29,A34,Th57
.=p+Up Lin(Ia) by Lm9
.=p+Up L by A24,RLVECT_5:20
.=Affin A by A2,A35,Th57;
end;
end;
theorem
for A,B be finite Subset of V st
A is affinely-independent & Affin A = Affin B & card B <= card A
holds B is affinely-independent
proof
let A,B be finite Subset of V;
assume that
A1: A is affinely-independent and
A2: Affin A=Affin B and
A3: card B<=card A;
per cases;
suppose A is empty;
then B is empty by A3,XXREAL_0:1;
hence thesis;
end;
suppose A is non empty;
then consider p be object such that
A4: p in A;
card A>0 by A4;
then reconsider n=(card A)-1 as Element of NAT by NAT_1:20;
A5: A c=Affin A by Lm7;
reconsider p as Element of V by A4;
set L=Lin(-p+A);
{}V c=B;
then consider Ia be affinely-independent Subset of V such that
{}V c=Ia and
A6: Ia c=B and
A7: Affin Ia=Affin B by Th60;
Ia is non empty by A2,A4,A7;
then consider q be object such that
A8: q in Ia;
-p+A c=[#]L
proof
let x be object;
assume x in -p+A;
then x in Lin(-p+A) by RLVECT_3:15;
hence thesis;
end;
then reconsider pA=-p+A as Subset of L;
-p+A\{0.V} is linearly-independent by A1,A4,Th41;
then A9: pA\{0.V} is linearly-independent by RLVECT_5:15;
-p+p=0.V by RLVECT_1:5;
then A10: 0.V in pA by A4;
then L=Lin(((-p+A)\{0.V})\/{0.V}) by ZFMISC_1:116
.=Lin(-p+A\{0.V}) by Lm9
.=Lin(pA\{0.V}) by RLVECT_5:20;
then A11: pA\{0.V} is Basis of L by A9,RLVECT_3:def 3;
reconsider IA=Ia as finite set by A6;
A12: Ia c=Affin Ia by Lm7;
reconsider q as Element of V by A8;
p+L=p+Up L by RUSUB_4:30
.=Affin A by A4,A5,Th57
.=q+Up Lin(-q+Ia) by A2,A7,A8,A12,Th57
.=q+Lin(-q+Ia) by RUSUB_4:30;
then A13: L=Lin(-q+Ia) by RLSUB_1:69;
set qI=-q+Ia;
A14: qI c=[#]Lin(-q+Ia)
proof
let x be object;
assume x in qI;
then x in Lin(-q+Ia) by RLVECT_3:15;
hence thesis;
end;
card pA=n+1 by Th7;
then A15: card(pA\{0.V})=n by A10,STIRL2_1:55;
then pA\{0.V} is finite;
then A16: L is finite-dimensional by A11,RLVECT_5:def 1;
reconsider qI as Subset of Lin(-q+Ia) by A14;
-q+Ia\{0.V} is linearly-independent by A8,Th41;
then A17: qI\{0.V} is linearly-independent by RLVECT_5:15;
-q+q=0.V by RLVECT_1:5;
then A18: 0.V in qI by A8;
then Lin(-q+Ia)=Lin(((-q+Ia)\{0.V})\/{0.V}) by ZFMISC_1:116
.=Lin(-q+Ia\{0.V}) by Lm9
.=Lin(qI\{0.V}) by RLVECT_5:20;
then qI\{0.V} is Basis of Lin(-q+Ia) by A17,RLVECT_3:def 3;
then A19: card(qI\{0.V})=n by A11,A13,A15,A16,RLVECT_5:25;
then (not 0.V in qI\{0.V}) & qI\{0.V} is finite by ZFMISC_1:56;
then A20: n+1=card((qI\{0.V})\/{0.V}) by A19,CARD_2:41
.=card qI by A18,ZFMISC_1:116
.=card Ia by Th7;
card IA<=card B by A6,NAT_1:43;
hence thesis by A3,A6,A20,CARD_2:102,XXREAL_0:1;
end;
end;
theorem Th62:
L is convex iff sum L = 1 & for v holds 0 <= L.v
proof
hereby assume L is convex;
then consider F be FinSequence of the carrier of V such that
A1: F is one-to-one and
A2: rng F=Carrier L and
A3: ex f be FinSequence of REAL st len f=len F & Sum(f)=1 & for n be Nat st
n in dom f holds f.n=L.(F.n) & f.n>=0;
consider f be FinSequence of REAL such that
A4: len f=len F and
A5: Sum(f)=1 and
A6: for n be Nat st n in dom f holds f.n=L.(F.n) & f.n>=0 by A3;
A7: len(L*F)=len F by FINSEQ_2:33;
now let k be Nat;
assume A8: 1<=k & k<=len F;
then k in dom f by A4,FINSEQ_3:25;
then A9: f.k=L.(F.k) by A6;
k in dom(L*F) by A7,A8,FINSEQ_3:25;
hence (L*F).k=f.k by A9,FUNCT_1:12;
end;
then L*F=f by A4,A7;
hence sum L=1 by A1,A2,A5,Def3;
let v be Element of V;
per cases;
suppose v in Carrier L;
then consider n be object such that
A10: n in dom F and
A11: F.n=v by A2,FUNCT_1:def 3;
A12: dom f=dom F by A4,FINSEQ_3:29;
then f.n=L.(F.n) by A6,A10;
hence L.v>=0 by A6,A10,A11,A12;
end;
suppose not v in Carrier L;
hence L.v>=0;
end;
end;
assume sum L=1;
then consider F be FinSequence of V such that
A13: F is one-to-one & rng F=Carrier L and
A14: 1=Sum(L*F) by Def3;
assume A15: for v be Element of V holds L.v>=0;
now take F;
thus F is one-to-one & rng F=Carrier L by A13;
take f=L*F;
thus len f=len F & Sum f=1 by A14,FINSEQ_2:33;
then A16: dom F=dom f by FINSEQ_3:29;
let n be Nat;
assume A17: n in dom f;
then A18: f.n=L.(F.n) by FUNCT_1:12;
F.n=F/.n by A16,A17,PARTFUN1:def 6;
hence f.n=L.(F.n) & f.n>=0 by A15,A18;
end;
hence thesis;
end;
theorem Th63:
L is convex implies L.x <= 1
proof
assume A1: L is convex;
then sum L=1 by Th62;
then consider F be FinSequence of V such that
F is one-to-one and
A2: rng F=Carrier L and
A3: 1=Sum(L*F) by Def3;
assume A4: L.x>1;
then x in dom L by FUNCT_1:def 2;
then reconsider v=x as Element of V by FUNCT_2:def 1;
v in Carrier L by A4;
then consider n be object such that
A5: n in dom F and
A6: F.n=v by A2,FUNCT_1:def 3;
len(L*F)=len F by FINSEQ_2:33;
then A7: dom(L*F)=dom F by FINSEQ_3:29;
A8: now let i be Nat;
assume i in dom(L*F);
then (L*F).i=L.(F.i) & F.i=F/.i by A7,FUNCT_1:12,PARTFUN1:def 6;
hence (L*F).i>=0 by A1,Th62;
end;
reconsider n as Nat by A5;
(L*F).n=L.x by A5,A6,A7,FUNCT_1:12;
hence contradiction by A3,A4,A5,A7,A8,MATRPROB:5;
end;
reconsider jj=1 as Real;
theorem Th64:
L is convex & L.x = 1 implies Carrier L = {x}
proof
assume that
A1: L is convex and
A2: L.x=1;
x in dom L by A2,FUNCT_1:def 2;
then reconsider v=x as Element of V by FUNCT_2:def 1;
consider K be Linear_Combination of{v} such that
A3: K.v=jj by RLVECT_4:37;
set LK=L-K;
A4: Carrier K c={v} by RLVECT_2:def 6;
sum LK=sum L-sum K by Th36
.=sum L-1 by A3,A4,Th32
.=1-1 by A1,Th62
.=0;
then consider F be FinSequence of V such that
F is one-to-one and
A5: rng F=Carrier LK and
A6: 0=Sum(LK*F) by Def3;
len(LK*F)=len F by FINSEQ_2:33;
then A7: dom(LK*F)=dom F by FINSEQ_3:29;
A8: for i be Nat st i in dom(LK*F) holds 0<=(LK*F).i
proof
let i be Nat;
assume A9: i in dom(LK*F);
then A10: (LK*F).i=LK.(F.i) by FUNCT_1:12;
A11: F.i in Carrier LK by A5,A7,A9,FUNCT_1:def 3;
then A12: LK.(F.i)=L.(F.i)-K.(F.i) by RLVECT_2:54;
per cases;
suppose F.i=v;
hence thesis by A2,A3,A9,A12,FUNCT_1:12;
end;
suppose F.i<>v;
then not F.i in Carrier K by A4,TARSKI:def 1;
then K.(F.i)=0 by A11;
hence thesis by A1,A10,A11,A12,Th62;
end;
end;
Carrier LK={}
proof
assume Carrier LK<>{};
then consider p be object such that
A13: p in Carrier LK by XBOOLE_0:def 1;
reconsider p as Element of V by A13;
consider i be object such that
A14: i in dom F and
A15: F.i=p by A5,A13,FUNCT_1:def 3;
reconsider i as Nat by A14;
(LK*F).i>0
proof
A16: LK.p=L.p-K.p by RLVECT_2:54;
per cases;
suppose p=v;
then LK.p=1-1 by A2,A3,RLVECT_2:54;
hence thesis by A13,RLVECT_2:19;
end;
suppose p<>v;
then not p in Carrier K by A4,TARSKI:def 1;
then K.p=0;
then A17: LK.p>=0 by A1,A16,Th62;
LK.p<>0 by A13,RLVECT_2:19;
hence thesis by A7,A14,A15,A17,FUNCT_1:12;
end;
end;
hence thesis by A6,A7,A8,A14,RVSUM_1:85;
end;
then ZeroLC(V)=L+(-K) by RLVECT_2:def 5;
then A18: -K=-L by RLVECT_2:50;
A19: v in Carrier L by A2;
-(-L)=L by RLVECT_2:53;
then K=L by A18,RLVECT_2:53;
hence thesis by A4,A19,ZFMISC_1:33;
end;
theorem Th65:
conv A c= Affin A
proof
let x be object;
assume A1: x in conv A;
then reconsider A1=A as non empty Subset of V;
conv(A1)={Sum(L) where L is Convex_Combination of A1:L in ConvexComb(V)}
by CONVEX3:5;
then consider L be Convex_Combination of A1 such that
A2: Sum L=x and
L in ConvexComb(V) by A1;
sum L=1 by Th62;
then x in {Sum K where K is Linear_Combination of A:sum K=1} by A2;
hence thesis by Th59;
end;
theorem
x in conv A & (conv A)\{x} is convex implies x in A
proof
assume that
A1: x in conv A and
A2: (conv A)\{x} is convex;
reconsider A1=A as non empty Subset of V by A1;
A3: conv(A1)={Sum(L) where L is Convex_Combination of A1:L in ConvexComb(V)}
by CONVEX3:5;
assume A4: not x in A;
consider L be Convex_Combination of A1 such that
A5: Sum L=x and
L in ConvexComb(V) by A1,A3;
set C=Carrier L;
A6: C c=A1 by RLVECT_2:def 6;
C<>{} by CONVEX1:21;
then consider p be object such that
A7: p in C by XBOOLE_0:def 1;
reconsider p as Element of V by A7;
A8: sum L=1 by Th62;
C\{p}<>{}
proof
assume A9: C\{p}={};
then C={p} by A7,ZFMISC_1:58;
then A10: L.p=1 by A8,Th32;
Sum L=L.p*p by A7,A9,RLVECT_2:35,ZFMISC_1:58;
then x=p by A5,A10,RLVECT_1:def 8;
hence thesis by A4,A6,A7;
end;
then consider q be object such that
A11: q in C\{p} by XBOOLE_0:def 1;
reconsider q as Element of V by A11;
A12: q in C by A11,XBOOLE_0:def 5;
then L.q<>0 by RLVECT_2:19;
then A13: L.q>0 by Th62;
reconsider mm=min(L.p,L.q) as Real;
consider Lq be Linear_Combination of{q} such that
A14: Lq.q=mm by RLVECT_4:37;
A15: p<>q by A11,ZFMISC_1:56;
then A16: p-q<>0.V by RLVECT_1:21;
A17: Carrier Lq c={q} by RLVECT_2:def 6;
{q}c=A by A6,A12,ZFMISC_1:31;
then Carrier Lq c=A by A17;
then A18: Lq is Linear_Combination of A by RLVECT_2:def 6;
consider Lp be Linear_Combination of{p} such that
A19: Lp.p=mm by RLVECT_4:37;
A20: Carrier Lp c={p} by RLVECT_2:def 6;
{p}c=A by A6,A7,ZFMISC_1:31;
then Carrier Lp c=A by A20;
then Lp is Linear_Combination of A by RLVECT_2:def 6;
then A21: Lp-Lq is Linear_Combination of A by A18,RLVECT_2:56;
then -(Lp-Lq) is Linear_Combination of A by RLVECT_2:52;
then reconsider Lpq=L+(Lp-Lq),L1pq=L-(Lp-Lq) as Linear_Combination of A1
by A21,RLVECT_2:38;
A22: Sum L1pq=Sum L-Sum(Lp-Lq) by RLVECT_3:4
.=Sum L+-Sum(Lp-Lq) by RLVECT_1:def 11;
L.p<>0 by A7,RLVECT_2:19;
then L.p>0 by Th62;
then A23: mm>0 by A13,XXREAL_0:15;
A24: for v holds L1pq.v>=0
proof
let v;
A25: L1pq.v=L.v-(Lp-Lq).v by RLVECT_2:54
.=L.v-(Lp.v-Lq.v) by RLVECT_2:54;
A26: L.v>=0 by Th62;
per cases;
suppose A27: v=q;
then not v in Carrier Lp by A15,A20,TARSKI:def 1;
then Lp.v=0;
hence thesis by A23,A14,A25,A26,A27;
end;
suppose A28: v=p;
L.p>=mm by XXREAL_0:17;
then A29: L.p-mm>=mm-mm by XREAL_1:9;
not v in Carrier Lq by A15,A17,A28,TARSKI:def 1;
then Lq.v=0;
hence thesis by A19,A25,A28,A29;
end;
suppose A30: v<>p & v<>q;
then not v in Carrier Lq by A17,TARSKI:def 1;
then A31: Lq.v=0;
not v in Carrier Lp by A20,A30,TARSKI:def 1;
then Lp.v=0;
hence thesis by A25,A31,Th62;
end;
end;
Sum(Lp-Lq)=(Sum Lp)-(Sum Lq) by RLVECT_3:4
.=mm*p-Sum Lq by A19,RLVECT_2:32
.=mm*p-mm*q by A14,RLVECT_2:32
.=mm*(p-q) by RLVECT_1:34;
then A32: Sum(Lp-Lq)<>0.V by A23,A16,RLVECT_1:11;
A33: sum(Lp-Lq)=(sum Lp)-sum Lq by Th36
.=mm-sum Lq by A19,A20,Th32
.=mm-mm by A14,A17,Th32
.=0;
A34: for v holds Lpq.v>=0
proof
let v;
A35: Lpq.v=L.v+(Lp-Lq).v by RLVECT_2:def 10
.=L.v+(Lp.v-Lq.v) by RLVECT_2:54;
A36: L.v>=0 by Th62;
per cases;
suppose A37: v=p;
then not v in Carrier Lq by A15,A17,TARSKI:def 1;
then Lpq.v=L.v+(mm-0) by A19,A35,A37;
hence thesis by A23,A36;
end;
suppose A38: v=q;
L.q>=mm by XXREAL_0:17;
then A39: L.q-mm>=mm-mm by XREAL_1:9;
not v in Carrier Lp by A15,A20,A38,TARSKI:def 1;
then Lp.v=0;
hence thesis by A14,A35,A38,A39;
end;
suppose A40: v<>p & v<>q;
then not v in Carrier Lq by A17,TARSKI:def 1;
then A41: Lq.v=0;
not v in Carrier Lp by A20,A40,TARSKI:def 1;
then Lp.v=0;
hence thesis by A35,A41,Th62;
end;
end;
sum L1pq=sum L-sum(Lp-Lq) by Th36
.=1+0 by A33,Th62;
then A42: L1pq is convex by A24,Th62;
then L1pq in ConvexComb(V) by CONVEX3:def 1;
then A43: Sum L1pq in conv(A1) by A3,A42;
sum Lpq=sum L+sum(Lp-Lq) by Th34
.=1+0 by A33,Th62;
then A44: Lpq is convex by A34,Th62;
then Lpq in ConvexComb(V) by CONVEX3:def 1;
then A45: Sum Lpq in conv(A) by A3,A44;
0.V=-0.V;
then -Sum(Lp-Lq)<>0.V by A32;
then Sum L1pq<>x by A5,A22,RLVECT_1:9;
then A46: Sum L1pq in conv(A)\{x} by A43,ZFMISC_1:56;
Sum Lpq=Sum L+Sum(Lp-Lq) by RLVECT_3:1;
then Sum Lpq<>x by A5,A32,RLVECT_1:9;
then A47: Sum Lpq in conv(A)\{x} by A45,ZFMISC_1:56;
(1/2)*Sum Lpq+(1-1/2)*Sum L1pq=(1/2)*(Sum L+Sum(Lp-Lq))+(1/2)*(Sum L+-Sum(Lp-
Lq)) by A22,RLVECT_3:1
.=(1/2)*Sum L+(1/2)*Sum(Lp-Lq)+(1/2)*(Sum L+-Sum(Lp-Lq)) by RLVECT_1:def 5
.=(1/2)*Sum L+(1/2)*Sum(Lp-Lq)+((1/2)*(Sum L)+(1/2)*(-Sum(Lp-Lq))) by
RLVECT_1:def 5
.=(1/2)*Sum L+((1/2)*Sum(Lp-Lq)+((1/2)*(-Sum(Lp-Lq))+(1/2)*(Sum L))) by
RLVECT_1:def 3
.=(1/2)*Sum L+((1/2)*Sum(Lp-Lq)+(1/2)*(-Sum(Lp-Lq))+(1/2)*(Sum L)) by
RLVECT_1:def 3
.=(1/2)*Sum L+((1/2)*(Sum(Lp-Lq)+(-Sum(Lp-Lq)))+(1/2)*(Sum L)) by
RLVECT_1:def 5
.=(1/2)*Sum L+((1/2)*0.V+(1/2)*(Sum L)) by RLVECT_1:5
.=(1/2)*Sum L+(0.V+(1/2)*(Sum L))
.=(1/2)*Sum L+(1/2)*(Sum L)
.=(1/2+1/2)*Sum L by RLVECT_1:def 6
.=Sum L by RLVECT_1:def 8;
then Sum L in (conv A)\{x} by A2,A46,A47;
hence contradiction by A5,ZFMISC_1:56;
end;
theorem Th67:
Affin conv A = Affin A
proof
thus Affin conv A c=Affin A by Th51,Th65;
let x be object;
assume x in Affin A;
then x in {Sum L where L is Linear_Combination of A:sum L=1} by Th59;
then consider L be Linear_Combination of A such that
A1: x=Sum L and
A2: sum L=1;
reconsider K=L as Linear_Combination of conv A by Lm1,RLVECT_2:21;
Sum K in {Sum M where M is Linear_Combination of conv A:sum M=1} by A2;
hence thesis by A1,Th59;
end;
theorem
conv A c= conv B implies Affin A c= Affin B
proof
Affin conv A=Affin A & Affin conv B=Affin B by Th67;
hence thesis by Th52;
end;
theorem Th69:
for A,B be Subset of RLS st A c= Affin B holds Affin (A\/B) = Affin B
proof
let A,B be Subset of RLS such that
A1: A c=Affin B;
set AB={C where C is Affine Subset of RLS:A\/B c=C};
B c=Affin B by Lm7;
then A\/B c=Affin B by A1,XBOOLE_1:8;
then Affin B in AB;
then A2: Affin(A\/B)c=Affin B by SETFAM_1:3;
Affin B c=Affin(A\/B) by Th52,XBOOLE_1:7;
hence thesis by A2;
end;
begin :: Barycentric Coordinates
definition
let V;
let A such that
A1: A is affinely-independent;
let x be object such that
A2: x in Affin A;
func x |-- A -> Linear_Combination of A means :Def7:
Sum it = x & sum it = 1;
existence
proof
Affin A={Sum L where L is Linear_Combination of A:sum L=1} by Th59;
hence thesis by A2;
end;
uniqueness
proof
let L1,L2 be Linear_Combination of A such that
A3: Sum L1=x and
A4: sum L1=1 and
A5: Sum L2=x and
A6: sum L2=1;
A7: Sum(L1-L2)=Sum L1-Sum L1 by A3,A5,RLVECT_3:4
.=0.V by RLVECT_1:15;
A8: L1-L2 is Linear_Combination of A by RLVECT_2:56;
sum(L1-L2)=1-1 by A4,A6,Th36
.=0;
then Carrier(L1-L2)={} by A1,A7,A8,Th42;
then ZeroLC(V)=L1+(-L2) by RLVECT_2:def 5;
then A9: -L2=-L1 by RLVECT_2:50;
L1=-(-L1) by RLVECT_2:53;
hence thesis by A9,RLVECT_2:53;
end;
end;
theorem Th70:
v1 in Affin I & v2 in Affin I implies
((1-r)*v1+r*v2) |-- I = (1-r) * (v1|--I) + r * (v2|--I)
proof
assume that
A1: v1 in Affin I and
A2: v2 in Affin I;
set rv12=(1-r)*v1+r*v2;
A3: rv12 in Affin I by A1,A2,RUSUB_4:def 4;
(1-r)*(v1|--I) is Linear_Combination of I & r*(v2|--I) is Linear_Combination
of I by RLVECT_2:44;
then A4: (1-r)*(v1|--I)+r*(v2|--I) is Linear_Combination of I by RLVECT_2:38;
A5: Sum((1-r)*(v1|--I)+r*(v2|--I))=Sum((1-r)*(v1|--I))+Sum(r*(v2|--I)) by
RLVECT_3:1
.=(1-r)*Sum(v1|--I)+Sum(r*(v2|--I)) by RLVECT_3:2
.=(1-r)*Sum(v1|--I)+r*Sum(v2|--I) by RLVECT_3:2
.=(1-r)*v1+r*Sum(v2|--I) by A1,Def7
.=rv12 by A2,Def7;
sum((1-r)*(v1|--I)+r*(v2|--I))=sum((1-r)*(v1|--I))+sum(r*(v2|--I)) by Th34
.=(1-r)*sum(v1|--I)+sum(r*(v2|--I)) by Th35
.=(1-r)*sum(v1|--I)+r*sum(v2|--I) by Th35
.=(1-r)*1+r*sum(v2|--I) by A1,Def7
.=(1-r)*1+r*1 by A2,Def7
.=1;
hence thesis by A3,A4,A5,Def7;
end;
theorem Th71:
x in conv I implies x|--I is convex & 0 <= (x|--I).v & (x|--I).v <= 1
proof
assume A1: x in conv I;
then reconsider I1=I as non empty Subset of V;
conv(I1)={Sum(L) where L is Convex_Combination of I1:L in ConvexComb(V)}
by CONVEX3:5;
then consider L be Convex_Combination of I1 such that
A2: Sum L=x and
L in ConvexComb(V) by A1;
conv I c=Affin I & sum L=1 by Th62,Th65;
then L=x|--I by A1,A2,Def7;
hence thesis by Th62,Th63;
end;
theorem Th72:
x in conv I implies ((x|--I).y = 1 iff x = y & x in I)
proof
assume A1: x in conv I;
then reconsider v=x as Element of V;
hereby assume A2: (x|--I).y=1;
x|--I is convex by A1,Th71;
then A3: Carrier(x|--I)={y} by A2,Th64;
y in {y} by TARSKI:def 1;
then reconsider v=y as Element of V by A3;
conv I c=Affin I by Th65;
hence A4: x=Sum(x|--I) by A1,Def7
.=(x|--I).v*v by A3,RLVECT_2:35
.=y by A2,RLVECT_1:def 8;
Carrier(x|--I)c=I & v in Carrier(x|--I) by A2,RLVECT_2:def 6;
hence x in I by A4;
end;
assume that
A5: x=y and
A6: x in I;
consider L be Linear_Combination of{v} such that
A7: L.v=jj by RLVECT_4:37;
Carrier L c={v} by RLVECT_2:def 6;
then A8: sum L=1 by A7,Th32;
A9: I c=Affin I by Lm7;
{v}c=I by A6,ZFMISC_1:31;
then A10: L is Linear_Combination of I by RLVECT_2:21;
Sum L=1*v by A7,RLVECT_2:32
.=v by RLVECT_1:def 8;
hence thesis by A5,A6,A7,A8,A9,A10,Def7;
end;
theorem Th73:
for I st x in Affin I & for v st v in I holds 0 <= (x|--I).v
holds x in conv I
proof
let I such that
A1: x in Affin I and
A2: for v st v in I holds 0<=(x|--I).v;
set xI=x|--I;
A3: Sum xI=x by A1,Def7;
reconsider I1=I as non empty Subset of V by A1;
A4: for v holds 0<=xI.v
proof
let v;
A5: v in Carrier xI or not v in Carrier xI;
Carrier xI c=I by RLVECT_2:def 6;
hence thesis by A2,A5;
end;
sum xI=1 by A1,Def7;
then A6: xI is convex by A4,Th62;
then conv(I1)={Sum(L) where L is Convex_Combination of I1:L in ConvexComb(V)}
& xI in ConvexComb(V) by CONVEX3:5,def 1;
hence thesis by A3,A6;
end;
theorem
x in I implies (conv I)\{x} is convex
proof
assume A1: x in I;
then reconsider X=x as Element of V;
A2: conv I c=Affin I by Th65;
now let v1,v2,r;
set rv12=r*v1+(1-r)*v2;
assume that
A3: 0<r and
A4: r<1;
assume that
A5: v1 in (conv I)\{x} and
A6: v2 in (conv I)\{x};
A7: 1-r>1-1 by A4,XREAL_1:10;
A8: v2 in conv I by A6,ZFMISC_1:56;
then A9: (v2|--I).X<=1 by Th71;
A10: v1 in conv I by A5,ZFMISC_1:56;
then A11: (v1|--I).X<=1 by Th71;
A12: rv12|--I=(1-r)*(v2|--I)+r*(v1|--I) by A2,A10,A8,Th70;
A13: now let w;
assume w in I;
A14: (rv12|--I).w=((1-r)*(v2|--I)).w+(r*(v1|--I)).w by A12,RLVECT_2:def 10
.=(1-r)*((v2|--I).w)+(r*(v1|--I)).w by RLVECT_2:def 11
.=(1-r)*((v2|--I).w)+r*((v1|--I).w) by RLVECT_2:def 11;
(v2|--I).w>=0 & (v1|--I).w>=0 by A10,A8,Th71;
hence 0<=(rv12|--I).w by A3,A7,A14;
end;
rv12 in Affin I by A2,A10,A8,RUSUB_4:def 4;
then A15: rv12 in conv I by A13,Th73;
v1<>x by A5,ZFMISC_1:56;
then (v1|--I).X<>1 by A10,Th72;
then (v1|--I).X<1 by A11,XXREAL_0:1;
then A16: r*((v1|--I).X)<r*1 by A3,XREAL_1:68;
v2<>x by A6,ZFMISC_1:56;
then (v2|--I).X<>1 by A8,Th72;
then (v2|--I).X<1 by A9,XXREAL_0:1;
then (1-r)*((v2|--I).X)<(1-r)*1 by A7,XREAL_1:68;
then A17: (1-r)*((v2|--I).X)+r*((v1|--I).X)<(1-r)*1+r*1 by A16,XREAL_1:8;
(rv12|--I).X=((1-r)*(v2|--I)).X+(r*(v1|--I)).X by A12,RLVECT_2:def 10
.=(1-r)*((v2|--I).X)+(r*(v1|--I)).X by RLVECT_2:def 11
.=(1-r)*((v2|--I).X)+r*((v1|--I).X) by RLVECT_2:def 11;
then rv12<>X by A1,A15,A17,Th72;
hence rv12 in (conv I)\{x} by A15,ZFMISC_1:56;
end;
hence thesis;
end;
theorem Th75:
for B st x in Affin I & for y st y in B holds (x|--I).y = 0
holds x in Affin(I\B) & x |-- I = x |-- (I\B)
proof
let B such that
A1: x in Affin I and
A2: for y st y in B holds(x|--I).y=0;
A3: Affin I={Sum L where L is Linear_Combination of I:sum L=1} by Th59;
A4: I\B is affinely-independent by Th43,XBOOLE_1:36;
consider L be Linear_Combination of I such that
A5: x=Sum L & sum L=1 by A1,A3;
A6: x|--I=L by A1,A5,Def7;
Carrier L c=I\B
proof
A7: Carrier L c=I by RLVECT_2:def 6;
let y be object such that
A8: y in Carrier L;
assume not y in I\B;
then y in B by A7,A8,XBOOLE_0:def 5;
then L.y=0 by A2,A6;
hence thesis by A8,RLVECT_2:19;
end;
then A9: L is Linear_Combination of I\B by RLVECT_2:def 6;
then x in {Sum K where K is Linear_Combination of I\B:sum K=1} by A5;
then x in Affin(I\B) by Th59;
hence thesis by A4,A5,A6,A9,Def7;
end;
theorem
for B st x in conv I & for y st y in B holds (x|--I).y = 0
holds x in conv (I\B)
proof
let B such that
A1: x in conv I and
A2: for y st y in B holds(x|--I).y=0;
set IB=I\B;
A3: conv I c=Affin I by Th65;
then x|--I=x|--IB by A1,A2,Th75;
then A4: for v st v in IB holds 0<=(x|--IB).v by A1,Th71;
A5: IB is affinely-independent by Th43,XBOOLE_1:36;
x in Affin IB by A1,A2,A3,Th75;
hence thesis by A4,A5,Th73;
end;
theorem Th77:
B c= I & x in Affin B implies x |-- B = x |-- I
proof
assume that
A1: B c=I and
A2: x in Affin B;
B is affinely-independent by A1,Th43;
then A3: Sum(x|--B)=x & sum(x|--B)=1 by A2,Def7;
Affin B c=Affin I & x|--B is Linear_Combination of I by A1,Th52,RLVECT_2:21;
hence thesis by A2,A3,Def7;
end;
theorem Th78:
v1 in Affin A & v2 in Affin A & r+s = 1 implies r*v1 + s*v2 in Affin A
proof
A1: Affin A={Sum L where L is Linear_Combination of A:sum L=1} by Th59;
assume v1 in Affin A;
then consider L1 be Linear_Combination of A such that
A2: v1=Sum L1 and
A3: sum L1=1 by A1;
assume v2 in Affin A;
then consider L2 be Linear_Combination of A such that
A4: v2=Sum L2 and
A5: sum L2=1 by A1;
A6: Sum(r*L1+s*L2)=Sum(r*L1)+Sum(s*L2) by RLVECT_3:1
.=r*Sum L1+Sum(s*L2) by RLVECT_3:2
.=r*v1+s*v2 by A2,A4,RLVECT_3:2;
r*L1 is Linear_Combination of A & s*L2 is Linear_Combination of A by
RLVECT_2:44;
then A7: r*L1+s*L2 is Linear_Combination of A by RLVECT_2:38;
assume A8: r+s=1;
sum(r*L1+s*L2)=sum(r*L1)+sum(s*L2) by Th34
.=r*sum L1+sum(s*L2) by Th35
.=r*1+s*1 by A3,A5,Th35
.=1 by A8;
hence thesis by A1,A7,A6;
end;
theorem Th79:
for A,B be finite Subset of V st
A is affinely-independent & Affin A c= Affin B
holds card A <= card B
proof
let A,B be finite Subset of V such that
A1: A is affinely-independent and
A2: Affin A c=Affin B;
per cases;
suppose A is empty;
hence thesis;
end;
suppose A is non empty;
then consider p be object such that
A3: p in A;
reconsider p as Element of V by A3;
A4: A c=Affin A by Lm7;
then A5: p in Affin A by A3;
set LA=Lin(-p+A);
A6: card A=card(-p+A) by Th7;
{}V c=B;
then consider Ib be affinely-independent Subset of V such that
{}V c=Ib and
A7: Ib c=B and
A8: Affin Ib=Affin B by Th60;
Ib is non empty by A2,A3,A8;
then consider q be object such that
A9: q in Ib;
reconsider q as Element of V by A9;
set LI=Lin(-q+Ib);
A10: card Ib=card(-q+Ib) by Th7;
-q+Ib c=the carrier of LI
proof
let x be object;
assume x in -q+Ib;
then x in LI by RLVECT_3:15;
hence thesis;
end;
then reconsider qI=-q+Ib as finite Subset of LI by A7,A10;
-q+q=0.V by RLVECT_1:5;
then A11: 0.V in qI by A9;
then A12: Lin(-q+Ib)=Lin(((-q+Ib)\{0.V})\/{0.V}) by ZFMISC_1:116
.=Lin(-q+Ib\{0.V}) by Lm9
.=Lin(qI\{0.V}) by RLVECT_5:20;
A13: -q+Ib\{0.V} is linearly-independent by A9,Th41;
then qI\{0.V} is linearly-independent by RLVECT_5:15;
then qI\{0.V} is Basis of LI by A12,RLVECT_3:def 3;
then reconsider LI as finite-dimensional Subspace of V by RLVECT_5:def 1;
A14: Ib c=Affin Ib by Lm7;
then A15: Affin Ib=q+Up LI by A9,Th57;
A16: Affin A=p+Up LA by A3,A4,Th57;
-p+A c=the carrier of LI
proof
let x be object;
A17: 2+-1=1;
2*q+(-1)*p=(1+1)*q+(-1)*p .=1*q+1*q+(-1)*p by RLVECT_1:def 6
.=1*q+1*q+-p by RLVECT_1:16
.=1*q+q+-p by RLVECT_1:def 8
.=q+q+-p by RLVECT_1:def 8
.=q+q-p by RLVECT_1:def 11
.=(q-p)+q by RLVECT_1:28;
then q+Up LI=q+LI & (q-p)+q in q+Up LI by A2,A8,A5,A9,A14,A15,A17,Th78,
RUSUB_4:30;
then A18: q-p in LI by RLSUB_1:61;
assume x in -p+A;
then A19: x in LA by RLVECT_3:15;
then x in V by RLSUB_1:9;
then reconsider w=x as Element of V;
w in Up LA by A19;
then p+w in Affin A by A16;
then p+w in q+Up LI by A2,A8,A15;
then consider u be Element of V such that
A20: p+w=q+u and
A21: u in Up LI;
A22: w=q+u-p by A20,RLVECT_4:1
.=q+(u-p) by RLVECT_1:28
.=u-(p-q) by RLVECT_1:29
.=u+-(p-q) by RLVECT_1:def 11
.=u+(q+-p) by RLVECT_1:33
.=u+(q-p) by RLVECT_1:def 11;
u in LI by A21;
then w in LI by A18,A22,RLSUB_1:20;
hence thesis;
end;
then reconsider LA as Subspace of LI by RLVECT_5:19;
-p+A c=the carrier of LA
proof
let x be object;
assume x in -p+A;
then x in LA by RLVECT_3:15;
hence thesis;
end;
then reconsider pA=-p+A as finite Subset of LA by A6;
-p+p=0.V by RLVECT_1:5;
then A23: 0.V in pA by A3;
then A24: {0.V}c=pA by ZFMISC_1:31;
A25: -p+A\{0.V} is linearly-independent by A1,A3,Th41;
A26: card{0.V}=1 by CARD_1:30;
A27: {0.V}c=qI by A11,ZFMISC_1:31;
A28: dim LI=card(qI\{0.V}) by A13,A12,RLVECT_5:15,29
.=card qI-1 by A27,A26,CARD_2:44;
Lin(-p+A)=Lin(((-p+A)\{0.V})\/{0.V}) by A23,ZFMISC_1:116
.=Lin(-p+A\{0.V}) by Lm9
.=Lin(pA\{0.V}) by RLVECT_5:20;
then dim LA=card(pA\{0.V}) by A25,RLVECT_5:15,29
.=card A-1 by A6,A26,A24,CARD_2:44;
then card A-1<=card qI-1 by A28,RLVECT_5:28;
then A29: card A<=card qI by XREAL_1:9;
card qI<=card B by A7,A10,NAT_1:43;
hence thesis by A29,XXREAL_0:2;
end;
end;
theorem
for A,B be finite Subset of V st
A is affinely-independent & Affin A c= Affin B & card A = card B
holds B is affinely-independent
proof
let A,B be finite Subset of V such that
A1: A is affinely-independent & Affin A c=Affin B & card A=card B;
{}V c=B;
then consider Ib be affinely-independent Subset of V such that
{}V c=Ib and
A2: Ib c=B and
A3: Affin Ib=Affin B by Th60;
reconsider IB=Ib as finite Subset of V by A2;
A4: card IB<=card B by A3,Th79;
card B<=card IB by A1,A3,Th79;
hence thesis by A2,A4,CARD_2:102,XXREAL_0:1;
end;
theorem
L1.v <> L2.v implies ((r*L1+(1-r)*L2).v = s iff r = (L2.v-s)/(L2.v-L1.v))
proof
set u1=L1.v,u2=L2.v;
A1: (r*L1+(1-r)*L2).v=(r*L1).v+((1-r)*L2).v by RLVECT_2:def 10
.=r*u1+((1-r)*L2).v by RLVECT_2:def 11
.=r*u1+(-r+1)*u2 by RLVECT_2:def 11
.=r*(u1-u2)+u2;
assume A2: u1<>u2;
then A3: u1-u2<>0;
A4: u2-u1<>0 by A2;
hereby assume(r*L1+(1-r)*L2).v=s;
then r*(u2-u1)=(u2-s)*1 by A1;
then r/1=(u2-s)/(u2-u1) by A4,XCMPLX_1:94;
hence r=(u2-s)/(u2-u1);
end;
assume r=(u2-s)/(u2-u1);
hence (r*L1+(1-r)*L2).v=(u2-s)/(-(u1-u2))*(u1-u2)+u2 by A1
.=(-(u2-s))/(u1-u2)*(u1-u2)+u2 by XCMPLX_1:192
.=-(u2-s)+u2 by A3,XCMPLX_1:87
.=s;
end;
theorem
A\/{v} is affinely-independent iff
A is affinely-independent & (v in A or not v in Affin A)
proof
set Av=A\/{v};
v in {v} by TARSKI:def 1;
then A1: v in Av by XBOOLE_0:def 3;
A2: A c=Av by XBOOLE_1:7;
hereby assume A3: Av is affinely-independent;
hence A is affinely-independent by Th43,XBOOLE_1:7;
v in Affin A implies v in A
proof
assume v in Affin A;
then {v}c=Affin A by ZFMISC_1:31;
then Affin Av=Affin A by Th69;
hence thesis by A2,A1,A3,Th58;
end;
hence v in A or not v in Affin A;
end;
assume that
A4: A is affinely-independent and
A5: v in A or not v in Affin A;
per cases by A5;
suppose v in A;
hence thesis by A4,ZFMISC_1:40;
end;
suppose A6: not v in Affin A & not v in A;
consider I be affinely-independent Subset of V such that
A7: A c=I and
A8: I c=Av and
A9: Affin I=Affin Av by A2,A4,Th60;
assume A10: not Av is affinely-independent;
not v in I
proof
assume v in I;
then {v}c=I by ZFMISC_1:31;
hence contradiction by A7,A10,Th43,XBOOLE_1:8;
end;
then A11: I c=Av\{v} by A8,ZFMISC_1:34;
A12: Av c=Affin Av by Lm7;
Av\{v}=A by A6,ZFMISC_1:117;
then I=A by A7,A11;
hence contradiction by A1,A6,A9,A12;
end;
end;
theorem
not w in Affin A & v1 in A & v2 in A & r<>1 & r*w + (1-r)*v1 = s*w + (1-s)*v2
implies r = s & v1 = v2
proof
assume that
A1: (not w in Affin A) & v1 in A & v2 in A and
A2: r<>1 and
A3: r*w+(1-r)*v1=s*w+(1-s)*v2;
r*w=r*w+0.V
.=r*w+((1-r)*v1-(1-r)*v1) by RLVECT_1:15
.=(s*w+(1-s)*v2)-(1-r)*v1 by A3,RLVECT_1:28
.=((1-s)*v2-(1-r)*v1)+s*w by RLVECT_1:28;
then r*w-s*w=(1-s)*v2-(1-r)*v1 by RLVECT_4:1;
then A4: (r-s)*w=(1-s)*v2-(1-r)*v1 by RLVECT_1:35;
A5: A c=Affin A by Lm7;
per cases;
suppose r<>s;
then A6: r-s<>0;
A7: 1/(r-s)*(1-s)=(r-s-(r-1))/(r-s) by XCMPLX_1:99
.=(r-s)/(r-s)-(r-1)/(r-s) by XCMPLX_1:120
.=1-(r-1)/(r-s) by A6,XCMPLX_1:60;
A8: -(1/(r-s)*(1-r))=-((1-r)/(r-s)) by XCMPLX_1:99
.=(-(1-r))/(r-s) by XCMPLX_1:187;
1=(r-s)*(1/(r-s)) by A6,XCMPLX_1:106;
then w=(1/(r-s)*(r-s))*w by RLVECT_1:def 8
.=1/(r-s)*((r-s)*w) by RLVECT_1:def 7
.=1/(r-s)*((1-s)*v2)-1/(r-s)*((1-r)*v1) by A4,RLVECT_1:34
.=(1/(r-s)*(1-s))*v2-1/(r-s)*((1-r)*v1) by RLVECT_1:def 7
.=(1/(r-s)*(1-s))*v2-(1/(r-s)*(1-r))*v1 by RLVECT_1:def 7
.=(1/(r-s)*(1-s))*v2+-(1/(r-s)*(1-r))*v1 by RLVECT_1:def 11
.=(1-(r-1)/(r-s))*v2+((r-1)/(r-s))*v1 by A7,A8,RLVECT_4:3;
hence thesis by A1,A5,RUSUB_4:def 4;
end;
suppose A9: r=s;
A10: 1-r<>0 by A2;
(1-r)*v1=(1-r)*v2 by A3,A9,RLVECT_1:8;
hence thesis by A9,A10,RLVECT_1:36;
end;
end;
theorem
v in I & w in Affin I & p in Affin(I\{v}) & w = r*v + (1-r)*p
implies r = (w|--I).v
proof
assume that
A1: v in I and
w in Affin I and
A2: p in Affin(I\{v}) and
A3: w=r*v+(1-r)*p;
A4: I c=conv I by CONVEX1:41;
Carrier(p|--(I\{v}))c=I\{v} by RLVECT_2:def 6;
then not v in Carrier(p|--(I\{v})) by ZFMISC_1:56;
then A5: (p|--(I\{v})).v=0;
I\{v}c=I by XBOOLE_1:36;
then Affin(I\{v})c=Affin I & I c=Affin I by Lm7,Th52;
hence (w|--I).v=((1-r)*(p|--I)+r*(v|--I)).v by A1,A2,A3,Th70
.=((1-r)*(p|--I)).v+(r*(v|--I)).v by RLVECT_2:def 10
.=((1-r)*(p|--I)).v+r*((v|--I).v) by RLVECT_2:def 11
.=(1-r)*((p|--I).v)+r*((v|--I).v) by RLVECT_2:def 11
.=(1-r)*((p|--I).v)+r*1 by A1,A4,Th72
.=(1-r)*((p|--(I\{v})).v)+r*1 by A2,Th77,XBOOLE_1:36
.=r by A5;
end;
TITLE: Bayes Rule answer verification
QUESTION [1 upvotes]: A screening test has a 90% chance of registering breast cancer if it
exists, as well as a 20% chance of falsely registering cancer when it
does not exist. About one in one hundred women requesting the
screening test end up diagnosed with breast cancer. Ms. X has just been
told that her screening test was positive. What is the probability that
she has breast cancer?
What is the probability that Ms. X has breast cancer?
– A. About 1%.
– B. About 90%.
– C. About 85%.
– D. About 80%.
– E. About 4%.
P(BC) = .9 --> has breast cancer
P(BC') = .1 --> doesn't have breast cancer
P(FC) = .8 --> falsely registering breast cancer
p(FC') = .2 --> not falsely registering cancer)
My answer (below) is dead wrong, I would like to know what I did wrong. Thanks for taking the time to read.
(.9)(.8)/(.9)(.1) + (.8)(.2) = 2.88
REPLY [2 votes]: She was tested and registered positive. What is the probability that she has it given that the test is positive?
Bayes' Rule and the Law of Total Probability say:
$$\mathsf P(\operatorname{Has}\mid \operatorname{Positive})~{=\dfrac{\mathsf P(\operatorname{Has})~\mathsf P(\operatorname{Positive}\mid \operatorname{Has})}{\mathsf P(\operatorname{Has})~\mathsf P(\operatorname{Positive}\mid \operatorname{Has})+\mathsf P(\operatorname{HasNot})~\mathsf P(\operatorname{Positive}\mid \operatorname{HasNot})} \\ =\dfrac{(0.01)(0.90)}{(0.01)(0.90)+(0.99)(0.20)}}$$
You had the right form: you used the wrong data. That one in one hundred have it means that ninety-nine in one hundred do not have it. $90\%$ of the former test positive; and $20\%$ of the latter do too.
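For completeness, the arithmetic can be checked in a few lines of Python (the probabilities are exactly those from the problem statement; variable names are just illustrative):

```python
# Bayes' rule for the screening test.
p_has = 0.01               # prior: one in one hundred has breast cancer
p_pos_given_has = 0.90     # test registers cancer when it exists
p_pos_given_hasnot = 0.20  # false-positive rate

# Law of total probability: P(Positive)
p_pos = p_has * p_pos_given_has + (1 - p_has) * p_pos_given_hasnot

# Posterior: P(Has | Positive)
posterior = p_has * p_pos_given_has / p_pos
print(round(posterior, 4))  # 0.0435, i.e. about 4% -- answer E
```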
TITLE: System of differential equations $x'=f(y-x),\, y'=f(x-y)$
QUESTION [3 upvotes]: I have a problem regarding differential equations, $f$ is increasing. $x,y$ are dependent on $t$
$$ f(0)=0 $$
$$x'=f(y-x)$$
$$y'=f(x-y)$$
$$x(0)=1$$
$$y(0)=0$$
prove that when $x,y$ are the solutions of this equation then $$ \lim_{t \to \infty} x = \lim_{t \to\infty}y $$
So far I have managed to see that $x'(0)=f(-1)$ and $y'(0)=f(1)$ which implies that $x'(0)<0$ and $y'(0)>0$ and basically $x'(t)<0$, $y'(t)>0$ as $x>y$ so as $f$ is increasing and $x$ is declining with $t$ and $y$ is increasing with $t$, the derivatives of $x$ and $y$ will be declining and increasing respectivly and they will have the boundary. Can anyone help from this on?
REPLY [1 votes]: First we note that $\lim_{t\rightarrow \infty} x = x_\infty$ indeed exists, because $x$ is monotone decreasing and bounded below (by $y$). On the other hand $y_\infty$ exists as well. We have $x_\infty \geq y_\infty$, because if there is a $t_0$ with $x(t_0)=y(t_0)$ we have $x'(t_0)=y'(t_0)=f(0)=0$, and this common value will be $x_\infty$.
Now assume that $x_\infty >y_\infty$. Integrating the first differential equation gives:
$$x(t) = x(0)+ \int_0^t f(y(s)-x(s))\,ds$$
But we then would have
$$\lim_{t\rightarrow \infty} x(t) =\lim_{t\rightarrow \infty}\left(x(0) +\int_0^t f(y(s)-x(s))\,ds\right) \leq\lim_{t\rightarrow \infty}\left(x(0) +\int_0^t f(y_\infty-x_\infty)\,ds\right) =-\infty,$$
using that $f$ is increasing, $y(s)\leq y_\infty$ and $x(s)\geq x_\infty$, so $f(y(s)-x(s))\leq f(y_\infty-x_\infty)<f(0)=0$.
This however contradicts the existence of the finite limit $x_\infty$. Hence $x_\infty = y_\infty$.
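A numerical illustration of the result (not part of the proof): taking $f=\tanh$ as a concrete increasing function with $f(0)=0$ and integrating with a simple forward-Euler scheme, both components approach the same limit.

```python
import math

# x' = f(y - x), y' = f(x - y) with f = tanh, x(0) = 1, y(0) = 0.
f = math.tanh
x, y = 1.0, 0.0
dt = 0.01
for _ in range(2000):  # integrate up to t = 20
    x, y = x + dt * f(y - x), y + dt * f(x - y)

print(x, y)  # both tend to the common limit
```

Since $\tanh$ is odd, $x+y$ is conserved here, so both solutions converge to $(x(0)+y(0))/2 = 1/2$; for a general increasing $f$ the common limit need not be this average.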
\section{Fractional Derivatives and their Approximations} \label{meth:frac_derivatives}
\subsection{Caputo fractional derivative}
The concept of fractional calculus started with questions about the generalization of integral and differential operators by L'Hospital and Leibniz~\cite{ross1977development} from the set of integers to the set of real numbers.
Subsequently, many prominent mathematicians focused on fractional calculus (for reviews, see Ross~\cite{ross1977development} and Machado~\cite{machado2011recent}).
Within the field, many different definitions for fractional differential and integral operators of arbitrary order have been introduced~\cite{podlubny1998fractional}.
In this work, we focus on the Caputo definition where the fractional derivative, $ D_t^\alpha $ (with $ \alpha > 0 $), of a $ n $-times differentiable function $ f $ can be written as,
\begin{equation} \label{eq_caputo_def}
D_t^\alpha f = \frac{1}{\Gamma( \lceil \alpha \rceil - \alpha)} \int_0^t \frac{ f^{\lceil \alpha \rceil} (s)}{(t-s)^{1+\alpha - \lceil \alpha \rceil}} \dif s,
\end{equation}
where $ \lceil \alpha \rceil $ denotes the ceiling of $ \alpha $.
The fractional derivative (for $ \alpha \notin \mathbb{N}$), unlike integer derivatives, is a convolution of the past behavior of the function, making it particularly useful for constitutive modeling of viscoelastic materials.
The Caputo form (which can be related to other formulations like the Riemann-Liouville fractional derivative) has the advantage that
\begin{equation} \label{eq_caputo_def2}
D_t^\alpha c = 0, \quad \text{ for any constant } c \in \mathbb{R}.
\end{equation}
For later analysis, we will instead assume that the functions $ f $ are continuous and $ n+1 $-times differentiable~\cite{diethelm2005algorithms}, which allows the use of integration by parts to rewrite the Caputo derivative as
\begin{equation}
D_t^\alpha f = \frac{1}{\Gamma(1+n-\alpha)} \left (
t^{n-\alpha} f^n (0) + \int_0^t (t-s)^{n-\alpha} f^{n+1} (s) \dif s
\right).
\end{equation}
This form eliminates the weak singularity in the integrand of Eq.~\eqref{eq_caputo_def}, which is convenient in our analysis.
\subsection{Current methods for numerical approximation of the Caputo derivative}\label{sect:current_frac_meths}
Fractional derivative operators and methods for approximating them are well studied in the literature.
For recent reviews of this subject, readers may consider the works of Zeid \cite{zeid2019approximation}, Guo \cite{guo2015numerical}, Weilbeer \cite{weilbeer2005efficient} and Podlubny \cite{podlubny1998fractional}.
Approaches in the literature can be categorized by their method of integration, either cumulative or recursive.
Cumulative approaches approximate the Caputo derivative by directly numerically approximating the integrals in Eq.~\eqref{eq_caputo_def} or~\eqref{eq_caputo_def2} using some form of a weighted sum over a set of discrete time points.
In contrast, recursive approaches first approximate the Caputo derivative by transforming it into a series of ordinary differential equations that have standard discretizations.
Suppose we seek to approximate the $ \alpha^{\text{th}}$-order fractional derivative, $ D_t^\alpha f $ ($0 < \alpha < 1 $), of a function $ f : [0,T] \to \mathbb{R} $ at a series of $N_T$ time points
$$
0=t_0 < t_1 < \cdots < t_{N_T} = T, \quad \text{ where } t_n = n \Delta_t,
$$
$ \Delta_t$ denotes the time step size, and $ f_n = f(t_n) $ is assumed to be easily computed given information from previous time points.
Considering cumulative approaches, a straightforward method for computing the Caputo derivative is a Riemann sum evaluated using the midpoint (MP) rule, i.e.
\begin{equation}\label{eqn:basiccaputo}
\text{D}_{n,\text{MP}}^\alpha f = \frac{1}{\Gamma(1-\alpha)} \sum_{i=1}^n \frac{1}{\left(t_n-(i-\frac{1}{2})\Delta_t\right)^\alpha}
(f_{i} - f_{i-1}).
\end{equation}
Here the integral in Eq.~\eqref{eq_caputo_def} is approximated at the midpoint of each time interval, with the derivative computed by a central difference.
The weights of integration decay temporally.
Other cumulative methods differ by the choice of weights to improve accuracy or achieve higher order convergence.
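A minimal sketch of the midpoint-rule sum in Eq.~\eqref{eqn:basiccaputo} is given below (our own illustration, with the difference $f_i - f_{i-1}$ paired with the midpoint weight as we read the sum; test function and step counts are arbitrary choices). It is checked against the closed-form Caputo derivative of $t^2$:

```python
import numpy as np
from math import gamma

def caputo_mp(f, t_end, alpha, n):
    # Midpoint-rule sum: (1/Gamma(1-a)) * sum_i (f_i - f_{i-1}) / (t_n - (i-1/2) dt)^a
    dt = t_end / n
    i = np.arange(1, n + 1)
    df = f(i * dt) - f((i - 1) * dt)
    return np.sum(df * (t_end - (i - 0.5) * dt) ** (-alpha)) / gamma(1.0 - alpha)

alpha, t_end = 0.5, 1.0
exact = 2.0 * t_end ** (2.0 - alpha) / gamma(3.0 - alpha)  # Caputo derivative of t^2
err_coarse = abs(caputo_mp(lambda s: s**2, t_end, alpha, 100) - exact)
err_fine = abs(caputo_mp(lambda s: s**2, t_end, alpha, 1000) - exact)
```

Because the weight $(t_n - s)^{-\alpha}$ is singular as $s \to t_n$, the convergence of this simple rule is slow near the upper limit, which is visible in the modest error reduction under refinement.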
Another common method for constructing the fractional order derivative uses the Gr\"unwald-Letnikov (GL) derivative~\cite{Scherer2011}, which extends the standard finite difference definition of the derivative.
Using the definition from Scherer \emph{et al.}~\cite{Scherer2011}, the GL derivative with corrections to match the Caputo derivative is
\begin{equation}\label{eq_gl}
\text{D}_{n,\text{GL}}^\alpha f =
\frac{1}{\Delta_t^\alpha} \left [
f_{n} - \sum_{m=1}^n c_m^\alpha f_{n-m}
\right]
- \frac{t_n^{-\alpha}}{\Gamma(1-\alpha)} f_0,
\end{equation}
where $ c_m^\alpha = c_{m-1}^\alpha (1 - [\alpha+1]/ m) $, $ c_1^\alpha = \alpha $, is the recursive definition of the real-valued binomial coefficients in the finite difference formula.
Another example is given by Diethelm \textit{et al.} \cite{diethelm1997generalized, diethelm2005algorithms} where the trapezoidal rule was employed, i.e.
\begin{equation} \label{eqn:diethelgrunwald}
\begin{aligned}
\text{D}_{n,D}^\alpha f =& \left[\frac{\Delta_t^{-\alpha}}{\Gamma(2-\alpha)}\right] \sum_{m=0}^n a_{m,n} \left(f_{n-m} - \sum_{k=0}^{\lceil\alpha\rceil}\left[\frac{(n-m)^k \Delta_t^k}{k!}\right]f_0^{(k)}\right),\\
a_{m,n} =&
\begin{cases}
1, &\text{if} \ m = 0,\\
(m+1)^{1-\alpha} - 2m^{1-\alpha}+(m-1)^{1-\alpha}, &\text{if} \ 0<m<n,\\
(1-\alpha)n^{-\alpha} - n^{1-\alpha}+(n-1)^{1-\alpha}, &\text{if} \ m=n,
\end{cases}
\end{aligned}
\end{equation}
where $a_{m,n}$ are the weights for the past values of $f_n$, and $f_0^{(k)}$ is the $k$-th derivative at $0$. Richardson extrapolation, combining results computed with different step sizes $\Delta_t$, can also be used to improve accuracy.
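A sketch of the trapezoid-type weights in Eq.~\eqref{eqn:diethelgrunwald} is shown below (our own illustration; for $0 < \alpha < 1$ the Taylor correction subtracts $f_0$ and $(n-m)\Delta_t f^\prime(0)$ from each past value, and both vanish for the test function $t^2$ used here, so the correction is omitted):

```python
import numpy as np
from math import gamma

def caputo_diethelm(f, t_end, alpha, n):
    # Trapezoid-type weights (0 < alpha < 1): w_0 = 1, interior weights
    # (m+1)^{1-a} - 2 m^{1-a} + (m-1)^{1-a}, and a modified last weight.
    dt = t_end / n
    w = np.empty(n + 1)
    w[0] = 1.0
    m = np.arange(1, n, dtype=float)
    if n > 1:
        w[1:n] = (m + 1.0) ** (1 - alpha) - 2.0 * m ** (1 - alpha) + (m - 1.0) ** (1 - alpha)
    w[n] = (1 - alpha) * n ** (-alpha) - n ** (1 - alpha) + (n - 1.0) ** (1 - alpha)
    fvals = f(dt * np.arange(n, -1, -1))   # f_{n-m} aligned with w[m], m = 0..n
    return np.dot(w, fvals) * dt ** (-alpha) / gamma(2.0 - alpha)

alpha, t_end = 0.5, 1.0
exact = 2.0 * t_end ** (2.0 - alpha) / gamma(3.0 - alpha)  # Caputo derivative of t^2
err_coarse = abs(caputo_diethelm(lambda s: s**2, t_end, alpha, 32) - exact)
err_fine = abs(caputo_diethelm(lambda s: s**2, t_end, alpha, 512) - exact)
```

For smooth test functions the observed convergence is substantially faster than the first-order midpoint sum, consistent with the trapezoidal construction.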
Gao \textit{et al. } \cite{gao2012finite} developed a similar method for solving fractional diffusion equations, \emph{i.e.}
\begin{equation} \label{eqn:gaoderivative}
\begin{aligned}
&\text{D}_{n,\text{Gao}}^\alpha f = \frac{\Delta_t^{-\alpha}}{\Gamma(2-\alpha)}\left[a_0^\alpha f_n - \sum_{i=1}^{n-1}\left( a_{n-i-1}^\alpha - a_{n-i}^\alpha \right)f_i - a_{n-1}^\alpha f_0\right], \\
&\quad a_i^\alpha = (i+1)^{1-\alpha} - i^{1-\alpha}, \quad i \geq 0.
\end{aligned}
\end{equation}
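This L1-type scheme can be sketched as follows (an illustration we add, with the summation indices written as usually done for L1 schemes, running over the past values $f_i$; the test function $t^3$ and the order $\alpha = 0.3$ are arbitrary choices with a known closed form):

```python
import numpy as np
from math import gamma

def caputo_l1(f, t_end, alpha, n):
    # L1-type weights a_j = (j+1)^{1-a} - j^{1-a}; the scheme reads
    # dt^{-a}/Gamma(2-a) [ a_0 f_n - sum_{i=1}^{n-1} (a_{n-i-1}-a_{n-i}) f_i - a_{n-1} f_0 ]
    dt = t_end / n
    j = np.arange(0, n, dtype=float)
    a = (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)
    fvals = f(dt * np.arange(0, n + 1))
    i = np.arange(1, n)
    inner = np.dot(a[n - i - 1] - a[n - i], fvals[1:n])
    val = a[0] * fvals[n] - inner - a[n - 1] * fvals[0]
    return val * dt ** (-alpha) / gamma(2.0 - alpha)

alpha, t_end = 0.3, 1.0
exact = 6.0 * t_end ** (3.0 - alpha) / gamma(4.0 - alpha)  # Caputo derivative of t^3
err_coarse = abs(caputo_l1(lambda s: s**3, t_end, alpha, 20) - exact)
err_fine = abs(caputo_l1(lambda s: s**3, t_end, alpha, 200) - exact)
```

Regrouping the weights shows this form is algebraically equivalent to summing $a_j (f_{n-j} - f_{n-j-1})$ over $j$, i.e. a piecewise-linear approximation of $f$ inside the Caputo integral.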
Other cumulative methods have been presented by Murio \textit{et al.} \cite{murio2008implicit}, Yang \textit{et al.} \cite{yang2010numerical} and Liu \textit{et al.} \cite{liu2007stability}, among many others in the literature.
All the cumulative methods presented here incur significant storage and computational costs, especially for complex three-dimensional tissue simulations.
Storage/memory costs are typically $ \mathcal{O}(N_T) $ and the computational complexity is often $ \mathcal{O}(N_T^2) $, or at best $ \mathcal{O}(N_T \log N_T)$, due to the need to sum over the entire time history.
In contrast, the computational cost of transient elasticity problems is independent of the number of time steps.
One strategy to cope with this limitation is to exploit the fact that the integration weights for numerical approximation of fractional derivatives exhibit a ``fading memory'': the weights decay with the number of time steps (or length of time).
Hence, some have proposed truncating the approximation, retaining only the most recent terms.
However, the accuracy of this approach is fundamentally dependent on the behavior of $ f $ and the fractional derivative order $ \alpha $.
For example, the weights decay at slower rates for lower $ \alpha $ values, requiring more and more terms to avoid inaccuracies due to truncation.
More difficult is the dependence on the function $ f $, which is not known \emph{a priori} in computational simulations.
A classic example is to consider a stress relaxation experiment, where an acute action occurring at the beginning of the experiment dictates the response over the entire time course.
Another strategy to reduce computational costs is the use of recursive methods.
Such methods mostly follow a similar approach~\cite{zopf2015comparison}.
Namely, the Caputo derivative is first approximated using decaying exponentials (or, equivalently, a Prony-series).
This approximation step is motivated differently by different authors. However, a natural rationale for this approximation stems from the relationship between the relaxation spectrum of the Caputo derivative and decaying exponentials via the Laplace transform, i.e.
\begin{equation}\label{eq_frac_trans}
\frac{(t-s)^{-\alpha }}{\Gamma(1-\alpha)}
=
\int_0^\infty \left[\frac{\sin(\pi \alpha) z^{\alpha-1} }{\pi} \right] \exp[(s-t)z] \dif z.
\end{equation}
The integral above can be approximated by summation at a set of points $z$ with the associated quadrature weights (in the bracketed term).
Inserting into Eq.~\eqref{eq_caputo_def}, we arrive at an equivalent form for a Prony-series comprised of decaying exponentials that can be recast as a series of ordinary differential equations (shown in the following section).
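The spectral identity in Eq.~\eqref{eq_frac_trans} can be verified numerically (an illustrative check we add here; the values of $\alpha$ and $u = t - s$ are arbitrary). The substitution $z = w^{1/\alpha}$ removes the integrable $z^{\alpha-1}$ singularity at the origin, and the exponential decay makes truncating the tail harmless:

```python
import numpy as np
from math import gamma, pi

def trap(y, x):
    # simple trapezoidal rule (avoids NumPy version differences around trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, u = 0.4, 0.7  # u = t - s > 0, alpha in (0,1); illustrative values

# Piece on (0,1]: substitute z = w^(1/alpha), which removes the z^(alpha-1) singularity.
w = np.linspace(0.0, 1.0, 200001)
inner = trap(np.exp(-u * w ** (1.0 / alpha)) / alpha, w)

# Piece on [1, 120]: the integrand decays like exp(-u z), so the truncated tail is negligible.
z = np.linspace(1.0, 120.0, 400001)
outer = trap(z ** (alpha - 1.0) * np.exp(-u * z), z)

val = np.sin(pi * alpha) / pi * (inner + outer)
exact = u ** (-alpha) / gamma(1.0 - alpha)
```

The identity itself follows from $\int_0^\infty z^{\alpha-1} e^{-uz}\,\mathrm{d}z = \Gamma(\alpha) u^{-\alpha}$ together with the reflection formula $\Gamma(\alpha)\Gamma(1-\alpha) = \pi/\sin(\pi\alpha)$.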
One of the earlier works presenting a recursive method for approximating the Caputo derivative was by Yuan and Agrawal~\cite{yuan2002numerical}.
In this case, the authors recast the integral based on the above and used Gauss-Laguerre quadrature for integration.
Though they presented the idea in the context of a specific fractional differential equation, the idea was clearly generalizable.
The Yuan-Agrawal method has been criticized for slow convergence for certain values of $\alpha$. An investigation by Diethelm \cite{diethelm2008investigation} found the integrand of the Yuan-Agrawal method to be non-smooth at $0$ for $\alpha \neq 1/2 + n$, contrary to the smoothness assumption underlying Gauss-Laguerre quadrature. This results in poor convergence as $\alpha\to0$ and $\alpha\to1$.
Instead, Diethelm suggested in his method \cite{diethelm2009improvement} that Gauss-Laguerre quadrature be replaced by Gauss-Jacobi quadrature, mapping the range of integration from $ [0, \infty)$ to $ [-1,1]$ through a change of variables.
Aiming for further improvements in the accuracy to these methods, Birk and Song \cite{birk2010improved} further extended the Diethelm method \cite{diethelm2009improvement} by altering the mapping onto the $ [-1,1]$ interval and considering the Fourier transform of the fractional operator, showing generally comparable or superior results.
Although the quadrature scheme of these approaches covers the full range of frequency, this is not necessarily an advantage. The range of relevant frequencies can be limited based on the corresponding problem and by the size of $\Delta_t$ used for simulation.
Moreover, as $ \alpha \to 1 $, the approximation must approach the first-order derivative, which becomes increasingly challenging to approximate.
In the following sections, we expand on this Prony-based approach by formulating the error estimates, generalizing its form and proposing a new approach for optimizing parameters of the weights and time constants of the associated Prony series.
\subsection{Prony approximation of the Caputo derivative}
\label{meth:frac_approx}
The Prony series approach has been used a number of times in literature~\cite{yuan2002numerical,birk2010improved,peter2013generalized, potts2010parameter,diethelm2008investigation,diethelm2009improvement}.
Suppose that the fractional derivative can be approximated using $ N $ Maxwell elements in parallel, each with its own weight $ \beta_k $ and time constant $ \tau_k $ ($k=1, \ldots, N $).
Then a Prony-based approximation $ \hat{D}_t^{\alpha} f $ to $ D_t^\alpha f$ can be written as,
\begin{equation}\label{eq_Prony_approx}
\hat{D}_t^{\alpha} f
:=
\beta_0 f^\prime(t)
+
\sum_{k=1}^N \int_0^t \beta_k \exp \left[ \frac{s-t}{\tau_k} \right] f^\prime (s) \dif s,
\end{equation}
or equivalently as
\begin{equation}\label{eq_Prony_approx_series}
\hat{D}_t^{\alpha} f
:=
\beta_0 f^\prime(t) + \sum_{k=1}^N q_k (t),
\quad
q_k^\prime(t) + \frac{1}{\tau_k} q_k(t) = \beta_k f^\prime(t),
\end{equation}
where we introduce $ N $ intermediate variables $ q_k $.
While Eq.~\eqref{eq_Prony_approx} requires further approximation of the integral over the time domain, Eq.~\eqref{eq_Prony_approx_series} enables discretization of the intermediate variables $ q_k $.
Eq.~\eqref{eq_Prony_approx_series} requires $2N+1 $ parameters, $ \bs{\theta}_\alpha = \{\tau_1, \ldots, \tau_N, \beta_0, \beta_1,\ldots, \beta_N \} $.
The approximation error of equations~\eqref{eq_Prony_approx} and~\eqref{eq_Prony_approx_series} is shown in Lemma~\ref{lem_Prony_error}:
\begin{lem}\label{lem_Prony_error}
Let $ f: [0,T] \to \mathbb{R} $ be a real function on the domain [0,T]. If $ f^\prime(0) < \infty $ and $ f^{\prime \prime} \in L^1(0,T) $, then
\begin{equation*}
(D_t^\alpha f - \hat{D}_t^\alpha f) = \varepsilon (t) f^\prime(0) + \int_0^t \varepsilon(z) f^{\prime \prime} (t-z) \dif z,
\end{equation*}
and for any $ t \in [0,T] $
\begin{equation*}
\left | D_t^\alpha f - \hat{D}_t^\alpha f \right |
\le
\| \varepsilon \|_{0,\infty}
\left [
| f^\prime (0) | +
\| f^{\prime \prime} \|_{0,1}
\right ],
\end{equation*}
where the truncation error $ \varepsilon $ is defined as
\begin{equation} \label{eq_Prony_error}
\varepsilon(z) := \frac{z^{1-\alpha}}{\Gamma(2-\alpha)} - \beta_0 + \sum_{k=1}^N \beta_k \tau_k ( \exp(-z / \tau_k ) - 1).
\end{equation}
\end{lem}
\begin{pol}{\ref{lem_Prony_error}}
Applying integration by parts to Eq.~\eqref{eq_Prony_approx}, and the fundamental theorem of calculus to the leading first order derivative term,
the Prony approximation to the fractional derivative can be re-written as,
\begin{equation*}
\hat{D}_t^\alpha f = \kappa(0,t) f^\prime (0)
+
\int_0^t \kappa(s,t) f^{\prime \prime} (s) ds,
\end{equation*}
where $ \kappa(s,t) = \beta_0 - \sum_{k=1}^N \beta_k \tau_k ( \exp([s-t]/\tau_k) - 1 ) $.
Subtracting this from Eq.~\eqref{eq_caputo_def2} (with $ n = 1 $) leads to
\begin{eqnarray}
(D_t^\alpha f - \hat{D}_t^\alpha f)
& = &
\frac{1}{\Gamma(2-\alpha)} \left (
t^{1-\alpha} f^\prime (0) + \int_0^t (t-s)^{1-\alpha} f^{\prime \prime} (s) \dif s
\right)
-
\kappa(0,t) f^\prime (0)
-
\int_0^t \kappa(s,t) f^{\prime \prime} (s) \dif s
\nonumber \\
& = &
\left( \frac{t^{1-\alpha}}{\Gamma(2-\alpha)} - \kappa(0,t) \right) f^\prime (0)
+
\int_0^t \left ( \frac{(t-s)^{1-\alpha}}{\Gamma(2-\alpha)} - \kappa(s,t) \right) f^{\prime \prime} (s) \dif s
\nonumber \\
& = &
\left( \frac{t^{1-\alpha}}{\Gamma(2-\alpha)} - \kappa(0,t) \right) f^\prime (0)
+
\int_0^t \left ( \frac{z^{1-\alpha}}{\Gamma(2-\alpha)} - \kappa(0,z) \right) f^{\prime \prime} (t-z) \dif z.
\nonumber
\end{eqnarray}
We deduce that the error in the approximation can be written in terms of $ \varepsilon(t) = t^{1-\alpha} / \Gamma(2-\alpha) - \kappa(0,t) $.
Hence,
\begin{eqnarray}
| D_t^\alpha f - \hat{D}_t^\alpha f |
& = &
\left | \varepsilon(t) f^\prime (0)
+
\int_0^t \varepsilon(z) f^{\prime \prime} (t-z) dz \right |
\nonumber \\
& \le &
| \varepsilon(t) f^\prime (0) |
+
\| \varepsilon \|_{0,\infty} \int_0^t | f^{\prime \prime} (t-z) | dz
\nonumber \\
& \le &
\| \varepsilon \|_{0,\infty}
\left [
| f^\prime (0) | +
\| f^{\prime \prime} \|_{0,1}
\right ],
\nonumber
\end{eqnarray}
where $ \| \cdot \|_{0,\infty} $ and $ \| \cdot \|_{0,1} $ denote the $ L^\infty(0,T) $ and $ L^1(0,T) $ norms, respectively.
\end{pol}
\begin{rmk} \label{rem_l2error}
Assuming $ f^\prime(0) = 0 $ and $ f^{\prime \prime} \in L^2(0,T)$, then
$$
\left | D_t^\alpha f - \hat{D}_t^\alpha f \right |
\le
\| \varepsilon \|_{0}
\| f^{\prime \prime} \|_{0},
$$
where $ \| \cdot \|_0 $ is the $ L^2(0,T) $ norm.
\end{rmk}
From Lemma~\ref{lem_Prony_error}, we observe that the error incurred, when re-writing the fractional derivative in terms of a finite series of Maxwell elements, is governed by $ \varepsilon $ as well as the behavior of the differentiated function $ f^{\prime\prime} $.
While for forward simulations in the time domain, the behavior of $ f $ is unknown, we can understand the impact of our approximation by directly evaluating $ \varepsilon $ over the time domain $ (0,T) $.
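Such a direct evaluation of $\varepsilon$ from Eq.~\eqref{eq_Prony_error} can be sketched as follows (the parameter values below are arbitrary placeholders, not optimized parameters; note that $\varepsilon(0) = -\beta_0$ exactly, which gives a quick consistency check):

```python
import numpy as np
from math import gamma

alpha = 0.5
beta0 = 0.02                           # placeholder parameters, not optimized values
betas = np.array([0.6, 0.25, 0.1])
taus = np.array([0.05, 0.5, 5.0])

def eps(z):
    # eps(z) = z^{1-a}/Gamma(2-a) - beta_0 + sum_k beta_k tau_k (exp(-z/tau_k) - 1)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    decay = (betas * taus * (np.exp(-np.outer(z, 1.0 / taus)) - 1.0)).sum(axis=1)
    return z ** (1.0 - alpha) / gamma(2.0 - alpha) - beta0 + decay

zgrid = np.linspace(0.0, 10.0, 2001)
sup_norm = float(np.max(np.abs(eps(zgrid))))  # the factor entering the Lemma's bound
```

The supremum of $|\varepsilon|$ over the simulated time window is exactly the factor multiplying $|f^\prime(0)| + \|f^{\prime\prime}\|_{0,1}$ in Lemma~\ref{lem_Prony_error}, so it can be monitored without knowing $f$.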
\subsection{Discrete Prony approximation of the Caputo derivative}
\label{meth:frac_approx_disc}
The approximation shown in Eq.~\eqref{eq_Prony_approx_series} lends itself to a straightforward discretized form by applying numerical updates to the intermediate variables $ q_k $.
We now approximate the first derivative using backward Euler and the integral term using a midpoint approximation, i.e.
\begin{equation}\label{eq_prony_approx_final}
\hat{\text{D}}_n^{\alpha} f
:=
\frac{\beta_0}{\Delta_t} \left( f^n - f^{n-1} \right) + \sum_{k=1}^N q_k^n,
\quad
q_k^n = e_k^2 q_k^{n-1} + e_k \beta_k \left( f^n - f^{n-1} \right),
\end{equation}
where $ e_k = \exp[ - \Delta_t / (2\tau_k)] $.
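The recursion in Eq.~\eqref{eq_prony_approx_final} can be sketched and checked against a dense quadrature of the continuous form in Eq.~\eqref{eq_Prony_approx} (an illustration we add; the Prony parameters and the test function $\sin t$ are arbitrary, not optimized values):

```python
import numpy as np

# Illustrative (not optimized) Prony parameters: beta_0 plus (beta_k, tau_k) pairs.
beta0, betas, taus = 0.5, np.array([1.0, 0.8, 0.3]), np.array([0.05, 0.5, 5.0])
T = 1.0

def prony_discrete(fvals, dt):
    # q_k^n = e_k^2 q_k^{n-1} + e_k beta_k (f^n - f^{n-1}), e_k = exp(-dt/(2 tau_k));
    # returns beta_0 (f^n - f^{n-1})/dt + sum_k q_k^n at the final step.
    e = np.exp(-dt / (2.0 * taus))
    q = np.zeros_like(betas)
    for n in range(1, len(fvals)):
        df = fvals[n] - fvals[n - 1]
        q = e**2 * q + e * betas * df
    return beta0 * df / dt + q.sum()

def prony_continuous(fprime, m=200001):
    # Dense trapezoidal quadrature of beta_0 f'(T) + sum_k int_0^T beta_k e^{(s-T)/tau_k} f'(s) ds.
    s = np.linspace(0.0, T, m)
    acc = beta0 * fprime(T)
    for bk, tk in zip(betas, taus):
        y = bk * np.exp((s - T) / tk) * fprime(s)
        acc += float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0)
    return acc

ref = prony_continuous(np.cos)  # f(t) = sin(t), so f'(t) = cos(t)

def run(dt):
    steps = round(T / dt)
    return prony_discrete(np.sin(np.arange(steps + 1) * dt), dt)

err_coarse, err_fine = abs(run(0.01) - ref), abs(run(0.001) - ref)
```

Refining $\Delta_t$ by a factor of ten shrinks the gap to the continuous Prony value roughly tenfold, consistent with the first-order estimate of Lemma~\ref{lem_disc_Prony_error} (the backward Euler term dominates).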
With this definition, we can examine the discretization error between the discrete operator $ \hat{\text{D}}_n^\alpha $ and its continuous counterpart $ \hat{D}_t^\alpha $, as shown in Lemma~\ref{lem_disc_Prony_error}.
\begin{lem}\label{lem_disc_Prony_error}
Let $ f: [0,T] \to \mathbb{R} $ be a real function in the time domain [0,T] and let $ f \in W^{3,\infty} (0,T) $.
Assume the time interval $ [0,T] $ is divided into equal divisions $ \Delta_t $, giving discrete time steps $ \{ t_1, \ldots, t_{N_T} \} $ with $ t_k = k \Delta_t $ and $ T = N_T \Delta_t $.
The value $ f^n = f(t_n) $ and the value $ q_k^n $ is our approximation to $ q_k(t_n) $ based on Eq.~\eqref{eq_prony_approx_final}.
Here we assume $ q_k^0 = q_k(0) $ for all $ k = 1, \ldots, N $, and that $ \beta_k, \tau_k \in \mathbb{R}^+ $.
Then for any $ t_n \in \{ t_1, \ldots, t_{N_T} \} $
\begin{equation*}
| \hat{D}_{t_n}^\alpha f - \hat{\textnormal{D}}_n^\alpha f | \le \Delta_t \left ( \beta_0 / 2 + C(\bs{\beta},\bs{\tau}) \Delta_t \right ) \| f \|_{W^{3,\infty}(0,T)},
\end{equation*}
where $ C(\bs{\beta},\bs{\tau}) > 0 $ is a constant depending on the chosen $ \beta_k, \tau_k $ for all $ k = 1, \ldots, N $.
\end{lem}
\begin{pol}{\ref{lem_disc_Prony_error}}
We start by manipulating the ordinary differential equation for $ q_k $ in Eq.~\eqref{eq_Prony_approx_series}.
Applying an integration factor and integrating over a discrete time interval $ [t_{n-1},t_n] $, we can see that
\begin{equation*}
q_k(t_n) = \exp\left[ -\frac{\Delta_t}{\tau_k} \right] q_k(t_{n-1}) + \int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) \dif s.
\end{equation*}
Subtracting the discrete approximation shown in Eq.~\eqref{eq_prony_approx_final}, we observe that
\begin{equation} \label{eq_lem2_eq1}
(q_k(t_n) - q_k^n)
=
\exp\left[ -\frac{\Delta_t}{\tau_k} \right] ( q_k(t_{n-1}) - q_k^{n-1} )
+
\int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) \dif s - e_k \beta_k \left( f^n - f^{n-1} \right).
\end{equation}
We recall the result that for $ g \in W^{2,\infty}(0,T) $, one may derive the midpoint relation
\begin{equation}
\int_{t_{n-1}}^{t_{n}} g(s) - g(t_{n-\frac{1}{2}}) \dif s
=
\int_{t_{n-1}}^{t_{n}} \int_{t_{n-\frac{1}{2}}}^{s} (s - z) g^{\prime \prime} (z) \dif z \dif s,
\end{equation}
as well as derive an inequality for the truncation error
\begin{eqnarray}
\int_{t_{n-1}}^{t_{n}} g(s) - g(t_{n-\frac{1}{2}}) \dif s
& = &
\int_{t_{n-\frac{1}{2}}}^{t_{n}} \int_{t_{n-\frac{1}{2}}}^{s} (s - z) g^{\prime \prime} (z) \dif z \dif s
+
\int_{t_{n-1}}^{t_{n-\frac{1}{2}}} \int_{s}^{t_{n-\frac{1}{2}}} (z - s) g^{\prime \prime} (z) \dif z \dif s
\nonumber \\
& \le &
\| g^{\prime \prime} \|_{L^\infty(t_{n-1},t_n)} \left[
\int_{t_{n-\frac{1}{2}}}^{t_{n}} \int_{t_{n-\frac{1}{2}}}^{s} | s - z | \dif z \dif s
+
\int_{t_{n-1}}^{t_{n-\frac{1}{2}}} \int_{s}^{t_{n-\frac{1}{2}}} |z - s| \dif z \dif s
\right]
\nonumber \\
& \le &
\frac{\Delta_t^3}{24} \| g^{\prime \prime} \|_{L^\infty(t_{n-1},t_n)}.
\nonumber
\end{eqnarray}
Utilizing the midpoint relation, the second half of Eq.~\eqref{eq_lem2_eq1} can be written as
\begin{eqnarray}
& &
\int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) \dif s - e_k \beta_k \left( f^n - f^{n-1} \right)
\nonumber \\
& & \hspace{10mm}
=
\int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) - \beta_k e_k f^{\prime}(t_{n-\frac{1}{2}}) \dif s
- e_k \beta_k \left( f^n - f^{n-1} - \Delta_t f^{\prime}(t_{n-\frac{1}{2}}) \right)
\nonumber \\
& & \hspace{10mm}
=
\int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) - \beta_k e_k f^{\prime}(t_{n-\frac{1}{2}}) \dif s
- e_k \beta_k \int_{t_{n-1}}^{t_n} f^\prime(s) - f^{\prime}(t_{n-\frac{1}{2}}) \dif s.
\nonumber \\
\end{eqnarray}
Note that each integral represents the truncation error of a midpoint approximation, i.e.
\begin{eqnarray}
& &
\left | \int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) \dif s - e_k \beta_k \left( f^n - f^{n-1} \right) \right|
\nonumber \\
& & \hspace{10mm}
\le
\frac{\beta_k \Delta_t^3}{24} \left \| \left ( \exp \left [ \frac{s-t_n}{\tau_k} \right ] f^\prime (s) \right)^{\prime \prime} \right \|_{L^\infty(t_{n-1},t_n)}
+ \frac{e_k \beta_k \Delta_t^3}{24} \left \| f^{\prime \prime \prime} \right \|_{L^\infty(t_{n-1},t_n)}
\nonumber \\
& & \hspace{10mm}
\le
\frac{\beta_k \Delta_t^3}{24} \left ( \frac{1}{\tau_k^2} \| f^\prime \|_{L^\infty(t_{n-1},t_n)} + \frac{2}{\tau_k} \| f^{\prime \prime} \|_{L^\infty(t_{n-1},t_n)} + (1+e_k) \| f^{\prime \prime \prime} \|_{L^\infty(t_{n-1},t_n)} \right )
\nonumber \\
\nonumber \\
& & \hspace{10mm}
\le
\Delta_t^3 C_0(\beta_k,\tau_k) \| f \|_{W^{3,\infty}(t_{n-1},t_n)},
\nonumber
\end{eqnarray}
where $ C_0(\beta_k,\tau_k) = (\beta_k / 24) \left( \tau_k^{-2} + 2\tau_k^{-1} + 1 + e_k \right) $.
Inserting the truncation error into Eq.~\eqref{eq_lem2_eq1}, we can see that
\begin{eqnarray}
| q_k(t_n) - q_k^n |
& \le &
e_k^2 | q_k(t_{n-1}) - q_k^{n-1} |
+
\left | \int_{t_{n-1}}^{t_n} \beta_k \exp\left[ \frac{s-t_n}{\tau_k} \right] f^{\prime}(s) ds - e_k \beta_k \left( f^n - f^{n-1} \right) \right |
\nonumber \\
& \le &
e_k^2 | q_k(t_{n-1}) - q_k^{n-1} |
+
\Delta_t^3 C_0(\beta_k,\tau_k) \| f \|_{W^{3,\infty}(t_{n-1},t_n)}
\nonumber \\
& \le &
e_k^{2n} | q_k(0) - q_k^{0} |
+
C_0(\beta_k,\tau_k) \Delta_t^3 \sum_{m=1}^{n} \| f \|_{W^{3,\infty}(t_{m-1},t_m)}
\nonumber \\
& \le &
C_0(\beta_k,\tau_k) T \Delta_t^2 \| f \|_{W^{3,\infty}(0,T)},
\nonumber
\end{eqnarray}
making the update formula for $ q_k $ $ \mathcal{O} (\Delta_t^2) $.
Subtracting the definition of $ \hat{D}_t^\alpha $ from $ \hat{\text{D}}_n^\alpha $, we can see that
\begin{eqnarray}
| \hat{D}_{t_n}^\alpha f - \hat{\text{D}}_n^\alpha f |
& = &
\left |
\beta_0 \left ( f^\prime(t_n) - \Delta_t^{-1} ( f^n - f^{n-1}) \right ) + \sum_{k=1}^N \left( q_k(t_n) - q_k^n \right)
\right |
\nonumber \\
& \le &
\beta_0 | f^\prime(t_n) - \Delta_t^{-1} ( f^n - f^{n-1}) | + \sum_{k=1}^N | q_k(t_n) - q_k^n |
\nonumber \\
& \le &
\beta_0 \left | \frac{1}{\Delta_t} \int_{t_{n-1}}^{t_n} (s-t_{n-1}) f^{\prime \prime}(s) ds \right |
+
C(\bs{\beta},\bs{\tau}) \Delta_t^2 \| f \|_{W^{3,\infty}(0,T)}
\nonumber \\
& \le &
\frac{\beta_0 \Delta_t}{2} \| f^{\prime \prime} \|_{L^{\infty}(t_{n-1},t_n)}
+
C(\bs{\beta},\bs{\tau}) \Delta_t^2 \| f \|_{W^{3,\infty}(0,T)},
\end{eqnarray}
where $ C(\bs{\beta},\bs{\tau}) = \sum_{k=1}^N C_0(\beta_k,\tau_k) $. Generalizing for any time point completes the proof.
\end{pol}
From Lemma~\ref{lem_disc_Prony_error} we observe the typical discretization errors expected for the backward Euler and midpoint rules.
The result shows that the discrete approximation converges to the continuous Prony approximation at rate $ \mathcal{O} (\Delta_t ) $.
Using backward Euler for the first-order derivative term is not necessary, and higher-order approximations could easily be used in Eq.~\eqref{eq_prony_approx_final}.
Here we select this form for ease and, in the context of biomechanics simulations, for stability considerations.
Combining Lemmas~\ref{lem_Prony_error} and~\ref{lem_disc_Prony_error}, we can derive an error estimate for this Prony series-based approximation in Theorem~\ref{thm_prony_error}.
\begin{thm} \label{thm_prony_error}
\sloppy Given the assumptions of Lemma~\ref{lem_disc_Prony_error}, the error between $ \hat{\textnormal{D}}_n^\alpha f $ and $ D_{t_n}^\alpha f $ for any $ t_n \in \{ t_1, \ldots, t_{N_T} \} $ is given by
$$
| \hat{\textnormal{D}}_n^\alpha f - D_{t_n}^\alpha f | \le
\| \varepsilon \|_{0,\infty}
\left [
| f^\prime (0) | +
\| f^{\prime \prime} \|_{0,1}
\right ]
+
\Delta_t \left ( \beta_0 / 2 + C(\bs{\beta},\bs{\tau}) \Delta_t \right ) \| f \|_{W^{3,\infty}(0,T)}.
$$
\end{thm}
\begin{pot}{\ref{thm_prony_error}}
This is a straightforward result of the triangle inequality, Lemma~\ref{lem_Prony_error} and~\ref{lem_disc_Prony_error}.
\end{pot}
Based on this error estimate, we can see that refinement in $ \Delta_t $ eliminates the portion of the error due to discretization, while the appropriate selection of the positive constants used in Eq.~\eqref{eq_prony_approx_final} can reduce the Prony approximation error given by the truncation $ \varepsilon $ (as we will later show).
We note that decreasing $ \Delta_t $ will eventually lead to a plateau in the error convergence based on how well the Prony series approximates the true fractional derivative.
However, increasing the number of Prony terms $N$ elicits a more complex response.
For large $ \Delta_t $, the error can become dependent on the second term in Theorem~\ref{thm_prony_error} and, thus, the constant $ C (\bs{\beta},\bs{\tau}) $.
This constant, in general, grows with the number of Prony terms, so the truncation prefactor can become larger for larger $ N $.
However, for sufficiently fine $ \Delta_t $, increasing the number of terms shifts the error onto $ \varepsilon $ and thus improves like the truncation error.
\subsection{Optimizing the Numerical approximation $\hat{D}_t^{\alpha} $} \label{meth:frac_opt}
The approximation introduced in Eq.~\eqref{eq_Prony_approx} relies on the effective identification of the parameters $ \bs{\theta}_\alpha $ that should be chosen to ensure optimal accuracy.
Ideally, $ \bs{\theta}_\alpha $ would be selected so as to minimize the error between the true Caputo derivative and the approximation in some suitable norm.
For example, if we consider the $L^2(0,T) $ norm, then we would seek to minimize the squared error
\begin{equation}
F(f,\bs{\theta}_\alpha) = \frac{1}{2} \int_0^T [ \hat{D}_t^\alpha f - D_t^\alpha f ]^2 \; \dif t,
\end{equation}
where $ F (f, \bs{\theta}_\alpha) $ gives the squared $ L^2 $-norm error between the approximate and the true Caputo derivatives for the given function $ f $ and set of parameters $ \bs{\theta}_\alpha $.
While the parameters $ \bs{\theta}_\alpha $ could be chosen to minimize $ F(f, \bs{\theta}_\alpha ) $, this approach presents challenges as we often do not know $ f $ prior to simulation.
Hence, instead, we must select our approximation to be suitably valid for all $ f $, \emph{i.e.} we find $ \bs{\theta}_\alpha $ such that (for $ 0 < \alpha < 1 $),
\begin{equation} \label{eq_min_principle}
F(f,\bs{\theta}_\alpha,\alpha) := \left \{ \min F(f,\bs{\varphi},\alpha), \; \bs{\varphi} \in \mathbb{R}^{2N+1} \right \}, \quad \text{ for any } f \in L^\infty(0,T) .
\end{equation}
The problem posed in Eq.~\eqref{eq_min_principle} is generally challenging to solve due to the flexibility in the behavior of $ f $.
In order to approximately minimize $ F$, suppose we instead consider only those functions $ f \in \mathcal{F}(M) $ where
\begin{equation}
\mathcal{F}(M) = \left \{ f \in L^2(0,T) \; \Big | \; f(t) = \text{Re} \left \{ \dsum_{k=1}^M C_k \exp ( i \omega_k t ) \right \}, \; \text{for some} \; C_1, \ldots, C_M \in \mathbb{C} \right \}
\end{equation}
represents the set of functions that can be written using a Fourier basis~\cite{tseng2000computation}. Here we note that $ \omega_k = k \omega^\star $, where $ \omega^\star = 2 \pi / T $ is the base frequency of $ f $ on the time interval $ [0,T] $. In this context, our function $f $ and its fractional derivative (for $ t \gg 0 $) can be written as
\begin{equation}
f(t) = \text{Re} \left \{ \sum_{k=1}^M C_k \exp ( i \omega_k t ) \right \},
\quad
D_t^\alpha f(t) \approx \text{Re} \left \{ \sum_{k=1}^M C_k (i \omega_k)^\alpha \exp ( i \omega_k t ) \right \}.
\end{equation}
In contrast, the approximate fractional derivative $ \hat{D}_t^\alpha $ based on Eq.~\eqref{eq_Prony_approx} can be written as,
\begin{equation}
\hat{D}_t^\alpha f(t) = \text{Re} \left \{ \sum_{k=1}^M C_k \left( \beta_0 i \omega_k + \sum_{m=1}^N \beta_m (\tau_m \omega_k) \frac{(\tau_m \omega_k + i)}{(\tau_m \omega_k)^2+1} \right) \exp \{ i \omega_k t \} \right \}.
\end{equation}
The difference between the true and approximate operators is then simply,
\begin{equation}
\hat{D}_t^\alpha f(t) - D_t^\alpha f(t) \approx
\text{Re} \left \{ \sum_{k=1}^M C_k \exp ( i \omega_k t ) \left [ \beta_0 i \omega_k + \sum_{m=1}^N \beta_m (\tau_m \omega_k) \frac{(\tau_m \omega_k + i)}{(\tau_m \omega_k)^2+1} - (i \omega_k)^\alpha \right] \right \},
\end{equation}
leaving the operator error dependent on the size of the complex number given in the brackets.
This means that for us to obtain a good approximation for any $ f \in \mathcal{F}(M) $, we must choose $ \bs{\theta}_\alpha $ such that we minimize the bracketed term.
Hence, we may reduce the minimization problem in Eq.~\eqref{eq_min_principle} to an analogous normalized system where
\begin{equation} \label{eq_algebraic_constraints}
\frac{1}{k^\alpha} \sum_{m=1}^N \hat{\beta}_m \frac{(k \hat{\tau}_m)^2}{(k \hat{\tau}_m)^2+1} = \cos \left( \frac{\pi \alpha}{2} \right),
\quad
\hat{\beta}_0 k^{1-\alpha}
+ \frac{1}{k^\alpha} \sum_{m=1}^N \hat{\beta}_m \frac{k \hat{\tau}_m}{(k \hat{\tau}_m)^2+1} = \sin \left( \frac{\pi \alpha}{2} \right),
\end{equation}
for $ k = 1, \ldots, M $.
Here, we note that parameters were normalized (with hats) to enable independence from the base frequency $ \omega^\star $, with $ \beta_0 = \hat{\beta}_0 (\omega^\star)^{\alpha-1} $, $ \beta_m = \hat{\beta}_m (\omega^\star)^\alpha $ and $ \tau_m = \hat{\tau}_m / \omega^\star $ ($ m = 1, \ldots N $).
This leads to $2M $ constraint equations that, if satisfied, imply equivalence of the true and approximate Caputo derivatives.
Assuming that our function is composed of many more Fourier terms than the number of Prony terms (i.e. $ M \gg N $), Eq.~\eqref{eq_algebraic_constraints} leads to a nonlinear over-constrained system that can be solved by optimization.
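One way to sketch this optimization (our own illustration, not the supplementary MATLAB implementation): fix log-spaced normalized time constants $\hat{\tau}_m$ and observe that, matching the complex transfer function of the Prony approximation to the target $(i\omega_k)^\alpha$ as in the preceding display, the resulting $2M$ real equations are linear in the weights, so ordinary least squares applies. All numerical values below are arbitrary choices, and positivity of the weights is not enforced here:

```python
import numpy as np

alpha = 0.5
N, M = 10, 50                      # Prony terms and Fourier constraints (M >> N)
k = np.arange(1, M + 1, dtype=float)
tau = np.logspace(-2.5, 0.5, N)    # fixed, log-spaced normalized time constants

# Match beta_0*i*k + sum_m beta_m (k tau_m)(k tau_m + i)/((k tau_m)^2 + 1) to (i k)^alpha.
# With tau fixed, the 2M real equations are linear in (beta_0, beta_1, ..., beta_N).
kt = np.outer(k, tau)
A = np.block([[np.zeros((M, 1)), kt**2 / (kt**2 + 1.0)],
              [k[:, None],       kt / (kt**2 + 1.0)]])
b = np.concatenate([k**alpha * np.cos(np.pi * alpha / 2.0),
                    k**alpha * np.sin(np.pi * alpha / 2.0)])
coef = np.linalg.lstsq(A, b, rcond=None)[0]
beta0, betas = coef[0], coef[1:]

fit = beta0 * 1j * k + (betas * kt * (kt + 1j) / (kt**2 + 1.0)).sum(axis=1)
rel_err = float(np.max(np.abs(fit - (1j * k) ** alpha) / np.abs((1j * k) ** alpha)))
```

Stacking the real and imaginary parts as separate rows makes the real-valued least-squares residual identical to the complex-valued one, so no complex solver is needed.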
\subsection{Numerical implementation} \label{meth:implementation}
Resulting parameters for approximating different fractional derivatives can be found in the MATLAB~\cite{matlab2018version} codes provided as supplementary material.
Here optimized parameters can be returned for values of $ N \in [3,15] $ and $ \alpha \in [0,1] $ for a specified base frequency $ \omega^\star $.
Codes and examples are also provided showing how the approximation can be utilized for approximating the Caputo derivative.
Other methods, such as the midpoint (MP) or Gr{\"{u}}nwald-Letnikov (GL) approximations, are also provided.
TITLE: orbits of automorphism group for indefinite lattices
QUESTION [13 upvotes]: I have a question about indefinite lattices.
QUESTION: Let $\Lambda\times\Lambda\rightarrow {\Bbb Z}$ be a lattice,
that is, ${\Bbb Z}^n$ with a non-degenerate integer quadratic form,
not necessarily unimodular, and $G:=O(\Lambda)$ the group of
(integer) isometries. Denote the set of all vectors $v\in\Lambda$ such that $v^2=r$ by $S_r$. I think that it is true (under some
additional assumptions on rank) that $G$ acts on $S_r$ with finitely
many orbits, but I don't know a good reference.
In this paper we have an argument proving this
for $r=0$: http://arxiv.org/abs/1208.4626
(Theorem 3.6), when the rank of a lattice
is $\geq 7$.
For unimodular lattices I think there is just
one orbit ("Eichler's theorem"), probably for rank $\geq 5$.
I would be very grateful for any reference
to this result in bigger generality, with
arbitrary $r$ and without unnecessary rank
restrictions.
The question comes from complex geometry: if $M$ is a hyperkahler manifold, there is a canonical non-unimodular integer quadratic form in $H^2(M)$, and its automorphisms are identified (up to finite index) with the mapping class group of $M$. Various geometric questions about $M$ are translated into lattice-theoretic questions about this lattice.
REPLY [1 votes]: There's a simple criterion for equivalence of two vectors in a lattice if the lattice contains two copies of the hyperbolic plane: you look at the length of the vector and its image in the discriminant group of the lattice. Gritsenko, Hulek and Sankaran mention it here http://arxiv.org/abs/0810.1614 but the result goes back to Eichler (they refer to it as the Eichler criterion). If you're working with the Beauville form, then all known Hyperkahlers satisfy the hyperbolic condition. | {"set_name": "stack_exchange", "score": 13, "question_id": 145672} |
TITLE: Complex integration with Cauchy formula II
QUESTION [0 upvotes]: I want to compute the following integral:
$$\int_{C^{+}} \frac{\text{Log}(z)}{(z+1)(z+2)^2}dz$$
where $C^{+}$ is the circle with center $z=-2$ and radius $3/2$.
I suppose that I have to use Cauchy integral formula but I am lost.
REPLY [0 votes]: Hint: Use partial fraction decomposition to get that
$$\int_{C^+} \frac{\log z}{(z+1)(z+2)^2}\:dz = \int_{C^+} \frac{\log z}{z+1}\:dz - \int_{C^+} \frac{\log z}{z+2}\:dz - \int_{C^+} \frac{\log z}{(z+2)^2}\:dz$$
Now what can you do for each individual piece? (Amazingly, this answer will be independent of choice of branch cut) | {"set_name": "stack_exchange", "score": 0, "question_id": 3654823} |
TITLE: the pseudometric induced by a measure
QUESTION [5 upvotes]: Let $(X, \Sigma, \mu)$ be a measure space.
We can define a pseudometric $d$ on $\Sigma$ in the following way:
$$d(A, B) = \mu(A\bigtriangleup{}B)$$
where $A\bigtriangleup{}B = (A\cup{}B)\setminus{}(A\cap{}B)$ is the symmetric difference.
Does this metric have a name? What does the topology induced by $d$ on $\Sigma$ tell us about the measure?
REPLY [1 votes]: Separability of this space gives separability of the $L^p$ spaces, $1\le p < \infty$.
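As a toy illustration (a small sketch with the counting measure on a five-point space, not part of the original answer), the pseudometric axioms — in particular the triangle inequality, which follows from $A\bigtriangleup C \subseteq (A\bigtriangleup B)\cup(B\bigtriangleup C)$ — can be checked exhaustively:

```python
# d(A, B) = mu(A ^ B) on the finite measure space X = {0,...,4}
# with the counting measure; check the triangle inequality exhaustively.
from itertools import combinations

X = frozenset(range(5))

def mu(A):            # counting measure
    return len(A)

def d(A, B):          # pseudometric induced by mu (^ is symmetric difference)
    return mu(A ^ B)

subsets = [frozenset(c) for r in range(6) for c in combinations(X, r)]
for A in subsets:
    for B in subsets:
        for C in subsets:
            assert d(A, C) <= d(A, B) + d(B, C)
```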
TITLE: Integral $\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx$
QUESTION [20 upvotes]: Last year I wondered about this integral:$$\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx$$
That is because it looks very similar to this integral
and this one. Surprisingly the result is quite nice and an approach can be found here.
$$\boxed{\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx=\frac{\sqrt{2}\pi(5\pi^2+12\pi\ln 2 - 12\ln^22)}{96}}$$
Although the approach there is quite skillful, I believed that an elementary approach can be found for this integral.
Here is my idea. First we will consider the following two integrals: $$I=\int_0^\frac{\pi}{2} x^2\sqrt{\tan x}\,\mathrm dx \,;\quad J=\int_0^\frac{\pi}{2} x^2\sqrt{\cot x}\,\mathrm dx$$
$$\Rightarrow I=\frac12 \left((I-J)+(I+J)\right)$$
Thus we need to evaluate the sum and the difference of those two from above.
I also saw from here that the "sister" integral differs only by a minus sign: $$\boxed{\int_0^\frac{\pi}{2} x^2\sqrt{\cot x}\,\mathrm dx=\frac{\sqrt{2}\pi(5\pi^2-12\pi\ln 2 - 12\ln^22)}{96}}$$
Thus, using those two boxed answers, we expect to find: $$I-J=\frac{\pi^2 \ln 2}{2\sqrt 2};\quad I+J=\frac{5\pi^3}{24\sqrt 2}-\frac{\pi \ln^2 2}{2\sqrt 2}\tag1$$
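These two relations follow from the boxed values by direct arithmetic, which a quick numeric cross-check confirms:

```python
# Check that the two boxed closed forms imply (1):
#   I - J = pi^2 ln2 / (2 sqrt2),
#   I + J = 5 pi^3/(24 sqrt2) - pi ln^2(2)/(2 sqrt2).
from math import pi, log, sqrt

ln2 = log(2)
I = sqrt(2) * pi * (5 * pi**2 + 12 * pi * ln2 - 12 * ln2**2) / 96
J = sqrt(2) * pi * (5 * pi**2 - 12 * pi * ln2 - 12 * ln2**2) / 96

assert abs((I - J) - pi**2 * ln2 / (2 * sqrt(2))) < 1e-12
assert abs((I + J) - (5 * pi**3 / (24 * sqrt(2)) - pi * ln2**2 / (2 * sqrt(2)))) < 1e-12
```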
$$I-J=\int_0^\frac{\pi}{2} x^2\left(\sqrt{\tan x}-\sqrt{\cot x}\right)\,\mathrm dx=\sqrt 2\int_0^\frac{\pi}{2} x^2 \cdot \frac{\sin x-\cos x}{\sqrt{\sin (2x)}}dx$$
$$=-\sqrt 2\int_0^\frac{\pi}{2} x^2 \left(\operatorname{arccosh}(\sin x+\cos x) \right)'dx=2\sqrt 2 \int_0^\frac{\pi}{2} x\operatorname{arccosh} (\sin x+\cos x)dx$$
Let us also denote the last integral with $I_1$ and do a $\frac{\pi}{2}-x=x$ substitution:
$$I_1=\int_0^\frac{\pi}{2} x\operatorname{arccosh} (\sin x+\cos x)dx=\int_0^\frac{\pi}{2} \left(\frac{\pi}{2}-x\right)\operatorname{arccosh} (\sin x+\cos x)dx$$
$$2I_1=\frac{\pi}{2} \int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx\Rightarrow I-J=\frac{\pi}{\sqrt 2}\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx$$
By using $(1)$ we can easily deduce that: $$\bbox[10pt,#000, border:2px solid green ]{\color{orange}{\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx=\frac{\pi}{2}\ln 2}}$$
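The boxed value can be confirmed numerically. Below is a rough composite-Simpson check (illustrative only; the integrand has square-root behaviour at the endpoints, so a fine grid is used):

```python
# Numeric check of int_0^{pi/2} arccosh(sin x + cos x) dx = (pi/2) ln 2.
from math import acosh, sin, cos, pi, log

def f(x):
    # the clamp guards against tiny rounding below 1 near the endpoints
    return acosh(max(sin(x) + cos(x), 1.0))

n = 200_000                      # even number of Simpson subintervals
h = (pi / 2) / n
s = f(0.0) + f(pi / 2)
for k in range(1, n):
    s += (4 if k % 2 else 2) * f(k * h)
approx = s * h / 3

assert abs(approx - (pi / 2) * log(2)) < 1e-5
```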
Doing something similar for $I+J$ we get:
$$I+J=\int_0^\frac{\pi}{2} x^2\left(\sqrt{\tan x}+\sqrt{\cot x}\right)\,\mathrm dx=\sqrt 2\int_0^\frac{\pi}{2} x^2 \cdot \frac{\sin x+\cos x}{\sqrt{\sin (2x)}}dx$$
$$=\sqrt 2 \int_0^\frac{\pi}{2} x^2 \left( \arcsin \left(\sin x-\cos x\right)\right)'dx=\frac{\pi^3 \sqrt 2}{8}-2\sqrt 2 \int_0^\frac{\pi}{2} x \arcsin \left(\sin x-\cos x\right)dx$$
Unfortunately, we're not lucky this time and the substitution used for $I-J$ doesn't help in this case.
Of course using $(1)$ we can again deduce that:
$$\bbox[10pt,#000, border:2px solid green ]{\color{red}{\int_0^\frac{\pi}{2} x \arcsin \left(\sin x-\cos x\right)dx=\frac{\pi^3}{96}+\frac{\pi}{8}\ln^2 2}}$$
In the meantime I found a way for the first one, mainly using: $$\frac{\arctan x}{x}=\int_0^1 \frac{dy}{1+x^2y^2}$$ Let us denote: $$I_1=\int_0^\frac{\pi}{2} \operatorname{arccosh} (\sin x+\cos x)dx\overset{IBP}= \int_0^\frac{\pi}{2} x \cdot \frac{\sin x-\cos x}{\sqrt{\sin(2x)}}dx$$
$$\overset{\tan x\rightarrow x}=\frac{1}{\sqrt 2}\int_0^\infty \frac{\arctan x}{1+x^2}\frac{x-1}{\sqrt x}dx=\frac1{\sqrt 2}\int_0^\infty \int_0^1 \frac{dy}{1+x^2y^2} \frac{\sqrt x(x-1)}{1+x^2}dx$$
$$=\frac1{\sqrt 2}\int_0^1 \int_0^\infty \frac{1}{1+y^2x^2} \frac{\sqrt x(x-1)}{1+x^2} dxdy$$
$$=\frac{1}{\sqrt 2}\int_0^1 \frac{{\pi}}{\sqrt 2}\left(\frac{2}{y^2-1}-\frac{1}{\sqrt y (y^2-1)}-\frac{\sqrt y}{y^2-1}\right)dy=\frac{\pi}{2}\ln 2$$
Although the integral in the third row looks quite unpleasant, it can be evaluated by quite elementary means.
Sadly a similar approach for the second one is madness, because we would have:
$$I_2=\int_0^1 \int_0^1 \int_0^\infty \frac{\sqrt x (x+1)}{1+x^2}\frac{1}{1+y^2x^2}\frac{1}{1+z^2x^2} dxdydz$$
But at least it gives hope that an elementary approach exists.
For this question I would like to see an elementary approach (without relying on special functions) for the second integral (red one).
If possible, please avoid contour integration, although it might count as elementary.
REPLY [11 votes]: On the path of Zacky, the missing part...
Let,
\begin{align}I&=\int_0^{\frac{\pi}{2}}x^2\sqrt{\tan x}\,dx\\
J&=\int_0^{\frac{\pi}{2}}\frac{x^2}{\sqrt{\tan x}}\,dx\\
\end{align}
Perform the change of variable $y=\sqrt{\tan x}$,
\begin{align}I&=\int_0^{\infty}\frac{2x^2\arctan^2\left(x^2\right)}{1+x^4}\,dx\\\\
J&=\int_0^{\infty}\frac{2x^2\arctan^2\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
\end{align}
\begin{align}
\text{I+J}&=\int_0^{\infty}\frac{2x^2\left(\arctan\left(x^2\right)+\arctan\left(\frac{1}{x^2}\right)\right)^2}{1+x^4}\,dx-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
&=\frac{\pi^2}{4}\int_0^{\infty}\frac{2x^2}{1+x^4}\,dx-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
\end{align}
Perform the change of variable $y=\dfrac{1}{x}$,
\begin{align}
\text{K}&=\int_0^{\infty}\frac{2x^2}{1+x^4}\,dx\\
&=\int_0^{\infty}\frac{2}{1+x^4}\,dx\\
\end{align}
Therefore,
\begin{align}
\text{2K}=\int_0^{\infty}\frac{2\left(1+\frac{1}{x^2}\right)}{\left(x-\frac{1}{x}\right)^2+2}\,dx
\end{align}
Perform the change of variable $y=x-\dfrac{1}{x}$,
\begin{align}\text{2K}&=2\int_{-\infty}^{+\infty}\frac{1}{2+x^2}\,dx\\
&=2\left[\frac{1}{\sqrt{2}}\arctan\left(\frac{x}{\sqrt{2}}\right)\right]_{-\infty}^{+\infty}\\
&=2\times \frac{\pi}{\sqrt{2}}
\end{align}
therefore,
\begin{align}
\text{I+J}&=\frac{\pi^3}{4\sqrt{2}}-4\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
\end{align}
Let $a>0$,
\begin{align}
\text{K}_1(a)&=\int_0^{\infty}\frac{x^2}{a+x^4}\,dx\\
&=\frac{1}{a}\int_0^{\infty}\frac{x^2}{1+\left(a^{-\frac{1}{4}}x\right)^4}\,dx\\
\end{align}
Perform the change of variable $y=a^{-\frac{1}{4}}x$,
\begin{align}
\text{K}_1(a)&=a^{-\frac{1}{4}}\int_0^{\infty}\frac{x^2}{1+x^4}\,dx\\
&=\frac{a^{-\frac{1}{4}}\pi}{2\sqrt{2}}
\end{align}
In the same manner,
\begin{align}
\text{K}_2(a)&=\int_0^{\infty}\frac{x^2}{1+ax^4}\,dx\\
&=\frac{a^{-\frac{3}{4}}\pi}{2\sqrt{2}}
\end{align}
Since, for $a$ real,
\begin{align}\arctan a=\int_0^1 \frac{a}{1+a^2t^2}\,dt\end{align}
then,
\begin{align}\text{L}&=\int_0^{\infty}\frac{x^2\arctan\left(x^2\right)\arctan\left(\frac{1}{x^2}\right)}{1+x^4}\,dx\\
&=\int_0^{\infty}\left(\int_0^1 \int_0^1 \frac{x^2}{(1+u^2x^4)\left(1+\frac{v^2}{x^4}\right)(1+x^4)}\,du\,dv\right)\,dx\\
&=\\
&\int_0^{\infty}\left(\int_0^1\int_0^1 \left(\frac{x^2}{(1-u^2)(1-v^2)(1+x^4)}-\frac{x^2}{1-u^2v^2}\left(\frac{u^2}{(1-u^2)(1+u^2x^4)}+\frac{v^2}{(1-v^2)(v^2+x^4)}\right)
\right)dudv\right)dx\\
&=\int_0^1\int_0^1 \left(\frac{\pi}{2\sqrt{2}(1-u^2)(1-v^2)}-\frac{1}{1-u^2v^2}\left(\frac{u^2\text{K}_2(u^2)}{1-u^2}+\frac{v^2\text{K}_1(v^2)}{1-v^2}\right)\right)dudv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\int_0^1 \left(\frac{1}{(1-u^2)(1-v^2)}-\frac{1}{(1-u^2v^2)}\left(\frac{u^{\frac{1}{2}}}{1-u^2}+\frac{v^{\frac{3}{2}}}{1-v^2}\right)\right)dudv\\
&=\pi\int_0^1\left[\frac{\sqrt{v}\left(\text{ arctanh}\left(\sqrt{uv}\right)-\text{ arctan}\left(\sqrt{uv}\right)-\text{ arctanh}\left(uv\right)\right)+\arctan\left(\sqrt{u}\right)+\ln\left(\frac{\sqrt{1+u}}{1+\sqrt{u}}\right)}{2\sqrt{2}(1-v^2)}\right]_{u=0}^{u=1}\,dv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\big(\text{ arctanh}\left(\sqrt{v}\right)-\text{ arctan}\left(\sqrt{v}\right)-\text{ arctanh}\left(v\right)\big)+\frac{\pi}{4}-\frac{1}{2}\ln 2}{1-v^2}\,dv\\
&=\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\arctan\left(\frac{1-\sqrt{v}}{1+\sqrt{v}}\right)}{1-v^2}\,dv+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right)\int_0^1 \frac{1-\sqrt{v}}{1-v^2}\,dv+\\
&\frac{\pi}{2\sqrt{2}}\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+\sqrt{v}}{2}\right)}{1-v^2}\,dv-\frac{\pi}{4\sqrt{2}}\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+v}{2}\right)}{1-v^2}\,dv
\end{align}
Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,
\begin{align}\text{R}_1&=\int_0^1\frac{\sqrt{v}\arctan\left(\frac{1-\sqrt{v}}{1+\sqrt{v}}\right)}{1-v^2}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\arctan v}{v(1+v^2)}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{\arctan v}{v}\,dv-\int_0^1 \frac{\arctan v}{1+v^2}\,dv\\
&=\frac{1}{2}\text{G}-\frac{1}{2}\Big[\arctan^2 v\Big]_0^1\\
&=\frac{1}{2}\text{G}-\frac{\pi^2}{32}\\
\text{R}_2&=\int_0^1 \frac{1-\sqrt{v}}{1-v^2}\,dv\\
&=\left[\ln\left(\frac{\sqrt{1+v}}{1+\sqrt{v}}\right)+\arctan\left(\sqrt{v}\right)\right]_0^1\\
&=\frac{\pi}{4}-\frac{1}{2}\ln 2\\
\end{align}
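Both closed forms can be verified numerically; in the illustrative check below, $G$ denotes Catalan's constant and the integrands are extended by their limiting values at $v=1$:

```python
# Numeric check of R1 = G/2 - pi^2/32 and R2 = pi/4 - (ln 2)/2.
from math import atan, sqrt, pi, log

G = 0.9159655941772190           # Catalan's constant

def r1(v):
    if v == 1.0:
        return 0.125             # limiting value of the integrand at v = 1
    s = sqrt(v)
    return s * atan((1 - s) / (1 + s)) / (1 - v * v)

def r2(v):
    if v == 1.0:
        return 0.25              # limiting value at v = 1
    return (1 - sqrt(v)) / (1 - v * v)

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

R1 = simpson(r1, 0.0, 1.0, 200_000)
R2 = simpson(r2, 0.0, 1.0, 200_000)
assert abs(R1 - (G / 2 - pi**2 / 32)) < 1e-5
assert abs(R2 - (pi / 4 - log(2) / 2)) < 1e-5
```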
Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,
\begin{align}\text{R}_3&=\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+\sqrt{v}}{2}\right)}{1-v^2}\,dv\\
&=-\frac{1}{2}\int_0^1\frac{(1-v)^2\ln(1+v)}{v(1+v^2)}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\:dv-\frac{1}{2}\int_0^1 \frac{\ln(1+v)}{v}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{1}{4}\int_0^1 \frac{2v\ln(1-v^2)}{v^2}\,dv+\frac{1}{2}\int_0^1 \frac{\ln(1-v)}{v}\,dv\\
\end{align}
In the second integral perform the change of variable $y=v^2$,
\begin{align}\text{R}_3&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv+\frac{1}{4}\int_0^1 \frac{\ln(1-v)}{v}\,dv\\
\end{align}
In the second integral perform the change of variable $y=1-v$,
\begin{align}\text{R}_3&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv+\frac{1}{4}\int_0^1 \frac{\ln v}{1-v}\,dv\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv+\frac{1}{4}\times -\zeta(2)\\
&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv-\frac{\pi^2}{24}\\
\end{align}
Perform the change of variable $y=\dfrac{1-v}{1+v}$,
\begin{align}
\text{S}_1&=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv\\
&=\int_0^1\frac{\ln(\frac{2}{1+v})}{1+v^2}\,dv\\
&=\ln 2\int_0^1 \frac{1}{1+v^2}\,dv-\text{S}_1\\
&=\frac{\pi}{4}\ln 2-\text{S}_1
\end{align}
Therefore,
\begin{align}
\text{S}_1&=\frac{\pi}{8}\ln 2\\
\text{R}_3&=\frac{\pi}{8}\ln 2-\frac{\pi^2}{24}\\
\end{align}
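Here $\text{S}_1=\int_0^1\frac{\ln(1+v)}{1+v^2}\,dv=\frac{\pi}{8}\ln 2$ is a classical integral, and a quick Simpson check (illustrative only) confirms it:

```python
# Numeric check of S1 = int_0^1 ln(1+v)/(1+v^2) dv = (pi/8) ln 2.
from math import log, pi

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

S1 = simpson(lambda v: log(1 + v) / (1 + v * v), 0.0, 1.0, 2_000)
assert abs(S1 - pi * log(2) / 8) < 1e-10
```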
Perform the change of variable $y=\dfrac{1-\sqrt{v}}{1+\sqrt{v}}$,
\begin{align}
\text{R}_4&=\int_0^1\frac{\sqrt{v}\ln\left(\frac{1+v}{2}\right)}{1-v^2}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\ln\left(\frac{1+v^2}{(1+v)^2}\right)}{v(1+v^2)}\,dv\\
&=\frac{1}{2}\int_0^1 \frac{(1-v)^2\ln\left(1+v^2\right)}{v(1+v^2)}\,dv+2\text{R}_3\\
&=\frac{1}{2}\int_0^1\frac{\ln(1+v^2)}{v}\,dv-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv+\frac{\pi}{4}\ln 2-\frac{\pi^2}{12}\\
&=\frac{1}{2}\times \frac{1}{4}\zeta(2)-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv+\frac{\pi}{4}\ln 2-\frac{\pi^2}{12}\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1\frac{\ln(1+v^2)}{1+v^2}\,dv\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1\int_0^1\frac{v^2}{(1+v^2)(1+v^2t)}\,dt\,dv\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1 \left[\frac{\arctan\left(v\right)\sqrt{t}-\arctan\left(v\sqrt{t}\right)}{(t-1)\sqrt{t}}\right]_{v=0}^{v=1}\,dt\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}-\int_0^1 \frac{\frac{\pi\sqrt{t}}{4}-\arctan\left(\sqrt{t}\right)}{(t-1)\sqrt{t}}\,dt\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}+\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\int_0^1 \frac{\sqrt{t}-1}{(t-1)\sqrt{t}}\,dt\\
&=\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}+\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\Big[2\ln\left(1+\sqrt{t}\right)\Big]_0^1\\
&=\int_0^1 \frac{\arctan\left(\frac{1-\sqrt{t}}{1+\sqrt{t}}\right)}{(1-t)\sqrt{t}}\,dt-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
\end{align}
Perform the change of variable $y=\dfrac{1-\sqrt{t}}{1+\sqrt{t}}$,
\begin{align}
\text{R}_4&=\int_0^1 \frac{\arctan t}{t}\,dt-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
&=\text{G}-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\\
\end{align}
Therefore,
\begin{align}L&=\frac{\pi}{2\sqrt{2}}\text{R}_1+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right) \text{R}_2+\frac{\pi}{2\sqrt{2}}\text{R}_3-\frac{\pi}{4\sqrt{2}}\text{R}_4\\
&=\frac{\pi}{2\sqrt{2}}\left(\frac{\text{G}}{2}-\frac{\pi^2}{32}\right)+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{4}-\frac{1}{2}\ln 2\right)^2+\frac{\pi}{2\sqrt{2}}\left(\frac{\pi}{8}\ln 2-\frac{\pi^2}{24}\right)-\\
&\frac{\pi}{4\sqrt{2}}\left(\text{G}-\frac{\pi}{4}\ln 2-\frac{\pi^2}{16}\right)\\
&=\frac{\pi^3}{96\sqrt{2}}+\frac{\pi\ln^2 2}{8\sqrt{2}}
\end{align}
Thus,
\begin{align}\text{I+J}&=\frac{\pi^3}{4\sqrt{2}}-4\text{L}\\
&=\frac{\pi^3}{4\sqrt{2}}-4\left(\frac{\pi^3}{96\sqrt{2}}+\frac{\pi\ln^2 2}{8\sqrt{2}}\right)\\
&=\boxed{\frac{5\pi^3}{24\sqrt{2}}-\frac{\pi\ln^2 2}{2\sqrt{2}}}
\end{align}
TITLE: Question about the chain rule in this problem.
QUESTION [3 upvotes]: This is a problem taken from Caluclus Early Transcendentals by James Stewart 7th edition.
My question is about how the chain rule is used to get from $m(dv/dt)$ to $mv(dv/dx)$?
REPLY [3 votes]: Suppose $x$ is a function of time $t$ so that $x=x(t)$.
Under certain conditions, we can write the inverse function $t=t(x)$ (e.g., If $x(t)$ is differentiable with non-zero derivative, $x'(t)\ne 0$, then the inverse function $t(x)$ exists and is also differentiable).
Now, suppose that $v$ is a function of time given by
$$v(t)=\frac{dx}{dt}$$
Then, as a function of $x$, we have from the chain rule that $v(t(x))$ can be written
$$\frac{dv}{dx}=\frac{dv}{dt}\frac{dt}{dx}$$
Solving for $\frac{dv}{dt}$ reveals
$$\begin{align}\frac{dv}{dt}&=\frac{\frac{dv}{dx}}{\frac{dt}{dx}}\\\\
&=\frac{dv}{dx}\frac{dx}{dt}\\\\
&=v\frac{dv}{dx}
\end{align}$$
which was to be shown! In going from the first line to the second, we used the fact that $\frac{dy}{dx}=1/\frac{dx}{dy}$ whenever $\frac{dx}{dy}\ne 0$.
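A small numeric illustration (with the hypothetical choice $x(t)=t^2$, so that $v=2t$ and, as a function of position, $v(x)=2\sqrt{x}$) confirms $\frac{dv}{dt}=v\frac{dv}{dx}$ via finite differences:

```python
# Check dv/dt = v * dv/dx at t = 1.5 for x(t) = t^2 (central differences).
from math import sqrt

h = 1e-5
t = 1.5

x = lambda s: s * s                                   # position x(t) = t^2
v_of_t = lambda s: (x(s + h) - x(s - h)) / (2 * h)    # v = dx/dt
dv_dt = (v_of_t(t + h) - v_of_t(t - h)) / (2 * h)     # dv/dt

v_of_x = lambda X: 2 * sqrt(X)                        # v through x, since t = sqrt(x)
X = x(t)
dv_dx = (v_of_x(X + h) - v_of_x(X - h)) / (2 * h)     # dv/dx

assert abs(dv_dt - v_of_x(X) * dv_dx) < 1e-4          # both sides equal 2 here
```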
\begin{document}
\title{Message Passing Algorithms for Phase Noise Tracking Using
Tikhonov Mixtures}
\author{Shachar~Shayovitz,~\IEEEmembership{Student~Member,~IEEE,}
and~Dan~Raphaeli,~\IEEEmembership{Member,~IEEE}
\thanks{S. Shayovitz and D. Raphaeli are with the Department
of EE-Systems, Tel Aviv University, Tel Aviv,
Israel, e-mail: [email protected],[email protected].}}
\markboth{IEEE Transactions on Communications}
{Submitted paper}
\maketitle
\begin{abstract}
In this work, a new low complexity iterative algorithm for decoding data transmitted over strong phase
noise channels is presented. The algorithm is based on the Sum \& Product Algorithm (SPA) with phase noise
messages modeled as Tikhonov mixtures. Since mixture based Bayesian inference such as SPA, creates an
exponential increase in mixture order for consecutive messages, mixture reduction is necessary. We propose
a low complexity mixture reduction algorithm which finds a reduced order mixture whose dissimilarity
metric is mathematically proven to be upper bounded by a given threshold.
As part of the mixture reduction, a new method for optimal clustering provides the closest circular
distribution, in Kullback Leibler sense, to any circular mixture. We further show a method for limiting
the number of tracked components and further complexity reduction approaches. We show simulation results
and complexity analysis for the proposed algorithm and show better performance than other state of the art
low complexity algorithms.
We show that the Tikhonov mixture approximation of SPA messages is equivalent to the tracking of multiple
phase trajectories, or alternatively can be viewed as multiple smart phase locked loops (PLLs). When the number of
components is limited to one, the result is similar to a smart PLL.
\end{abstract}
\begin{keywords}
phase noise, factor graph, Tikhonov, cycle slip, directional
statistics, moment matching, mixture models
\end{keywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{M}any high frequency communication systems operating
today employ low cost upconverters or downconverters which create
phase noise. Phase noise can severely limit the information rate of a communications
system and pose a serious challenge for the detection systems.
Moreover, simple solutions for phase noise tracking such as PLL either
require low phase noise or otherwise require many pilot symbols which
reduce the effective data rate.
In the last decade we have witnessed a significant amount of research
done on joint estimation and decoding of phase noise and coded
information. For example, \cite{barb2005} and \cite{colavolpe2006} which are based on the
factor graph representation of the joint posterior, proposed in \cite{worthen2001} and allows the design
of efficient message passing algorithms which incorporate both the code
graph and the channel graph. The use of LDPC or Turbo decoders, as
part of iterative message passing schemes, allows the receiver to
operate in low SNR regions while requiring less pilot symbols.
In order to perform MAP decoding of the code symbols, the SPA is applied to the factor graph. The SP
algorithm is a message passing algorithm which computes the exact
marginal for each code symbol, provided there are no cycles in the
factor graph. In the case of phase noise channels, the messages related to the phase are continuous, thus
recursive computation of messages requires computation of integrals which have no analytical solution and
the direct application of this algorithm is not feasible.
A possible approximation of MAP detection is to quantize the phase noise and perform an
approximated SP. The channel phase takes only a finite number of
values $L$, thus creating a trellis diagram representing the random
walk. Assuming a forward-backward scheduling, the SPA reduces to a BCJR run on this trellis,
followed by LDPC decoding. This algorithm (called DP - discrete phase in this
paper) requires large computational resources (large $L$) to reach
high accuracy, rendering it not practical for some real world
applications.
In order to circumvent the problem of continuous messages, many algorithms have resorted to
approximations. In \cite{colavolpe2006}, the algorithm uses channel memory truncation
rather than an explicit representation of the channel parameters. In
\cite{barb2005} section B., an algorithm which efficiently balances
the tradeoff between accuracy and complexity was proposed (called BARB
in this paper). BARB uses Tikhonov distribution parameterizations
(canonical model) for all the SPA messages concerning a phase node.
However, the approximation as defined in \cite{barb2005}, is only good
when the information from the LDPC decoder is good (high reliability).
In the first iteration the approximation is poor, and in fact
exists only for pilot symbols. The LLR messages related to the
received symbols which are not pilots are essentially zero (no
information). This inability to accurately approximate the messages in
the first iterations causes many errors and can create an error floor.
This problem is intensified when using either low code rate or high
code rate. In the first case, this is because the pilots are less
significant, as their energy is reduced. In the second case, the
poor estimation of the symbols far away from the pilots cannot be
overcome by the error correcting capacity of the code.
In order to overcome this limitation, BARB relies on the insertion of
frequent pilots to the transmitted block causing a reduction of the
information rate.
In this paper, a new approach for approximating the phase noise forward and backward messages using
Tikhonov mixtures is proposed. Since SP recursion equations create an exponential increase in the number
of mixture components, a mixture reduction algorithm is needed at each phase message calculation to keep
the mixture order small.
We have tested several state-of-the-art clustering algorithms; they failed for this task and cannot provide proven accuracy. We have therefore derived a new clustering algorithm. A distinct property of
the new algorithm is its ability to provide adaptive mixture order, while keeping specified accuracy
constraint, where the accuracy is the Kullback Leibler (KL) divergence between the original and the
clustered pdfs. A proof for the accuracy of this mixture reduction algorithm is also presented in this
paper.
We show that the process of hypothesis expansion followed by clustering is equivalent to a sophisticated
tracker which can track most of the multiple hypotheses of possible phase trajectories. Occasionally, the
number of hypotheses grows, and more options for phase trajectories emerge. Each such event causes the
tracker to create another tracking loop. In other occasions, two trajectories are merged into one. We
show, as an approximation, the tracking of each isolated phase trajectory is equivalent to a PLL and a
split event is equivalent to a point in time when a phase slip may happen.
In the second part, we use a limited order Tikhonov mixture. This limitation may cause the tracking
algorithm to lose tracking of the correct phase trajectory, and is analogous to a cycle slip in PLL. We
propose a method to combat these slips with only a slight increase in complexity. The principle operation
of the method is that each time some hypothesis is abandoned, we can calculate the probability of being in
the correct trajectory and we can use this information wisely in the calculation of the messages. We
provide further complexity reduction approaches. One of these approaches is to abandon the clustering
altogether, and replace it by component selection algorithm, which maintains the specified accuracy but
requires more components in return. Now the complexity of clustering is traded against the complexity of
other tasks. Finally, we show simulations results which demonstrate that the proposed scheme's Packet
Error Rate (PER) are comparable to the DP algorithm and that the resulting computational complexity is
much lower than DP and in fact is comparable to the algorithm proposed in \cite{barb2005}.
The remainder of this paper is organized as follows. Section II
introduces the channel model and presents the derivation of the exact
SPA from \cite{barb2005}. In Section III, we introduce the reader to
the directional statistics framework, and some helpful results on the
KL divergence. Section IV presents the mixture order canonical
model and provides a review on mixture reduction algorithms. Section V presents two mixture reduction
algorithms for approximating the SP messages. Section VI presents the computation of LLRs. A complexity
comparison is carried out in Section VII. Finally, in
Section VIII we present some numerical results and in Section IX, we
discuss the results and point out some interesting claims.
\section{System Model}
\label{sec:system_model}
In this section we present the system model used throughout this paper. We assume a sequence of data bits
is encoded using an LDPC code and then mapped to a complex signal constellation $\mathbb{A}$ of size $M$,
resulting in a sequence of complex modulation
symbols $\mathbf{c} = (c_{0},c_{1},...,c_{K-1})$. This sequence is transmitted
over an AWGN channel affected by carrier phase noise. Since we use a long LDPC code, we can assume the
symbols are drawn independently from the constellation. The
discrete-time baseband complex equivalent channel model at the
receiver is given by:
\begin{equation}\label{sys_model}
r_{k} = c_{k}e^{j\theta_{k}}+n_{k} \;\;\;\; k=0,1,...,K-1.
\end{equation}
where $K$ is the length of the transmitted sequence of complex symbols.
The phase noise stochastic model is a Wiener process
\begin{equation}\label{weiner}
\theta_{k} = \theta_{k-1} + \Delta_{k}
\end{equation}
where ${\Delta_{k}}$ is a real, i.i.d. Gaussian sequence with
$\Delta_{k} \sim \mathcal{N}(0,\sigma_{\Delta}^{2})$ and $\theta_{0}
\sim \mathcal{U}[0,2\pi)$. For the sake of clarity we define pilots as
transmitted symbols which are known to both the transmitter and
receiver and are repeated in the transmitted block every known number
of data symbols. We also define a preamble as a sequence of pilots in
the beginning of a transmitted block. We assume that the transmitted sequence is padded with pilot symbols
in order to bootstrap the algorithms and maintain the tracking.
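For concreteness, the channel model (\ref{sys_model})-(\ref{weiner}) can be simulated directly; the following sketch (our illustration only, with QPSK modulation and arbitrary parameter values assumed) generates one received block:

```python
# Simulation sketch of r_k = c_k e^{j theta_k} + n_k with a Wiener phase.
# QPSK and the parameter values below are illustrative assumptions.
import cmath, math, random

random.seed(0)
K = 1000                   # block length
sigma_delta = 0.05         # std of the phase increments Delta_k
sigma = 0.3                # AWGN std per real dimension

qpsk = [cmath.exp(1j * (math.pi / 4 + m * math.pi / 2)) for m in range(4)]
c = [random.choice(qpsk) for _ in range(K)]

theta = random.uniform(0.0, 2.0 * math.pi)        # theta_0 ~ U[0, 2pi)
r = []
for k in range(K):
    r.append(c[k] * cmath.exp(1j * theta)
             + complex(random.gauss(0, sigma), random.gauss(0, sigma)))
    theta += random.gauss(0, sigma_delta)         # Wiener step (2)
```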
\subsection{Factor Graphs and the Sum Product Algorithm}
Since we are interested in optimal MAP detection, we will use the framework defined in \cite{worthen2001},
compute the SPA equations and thus perform approximate MAP detection. The factor graph representation of
the joint posterior distribution
was given in \cite{barb2005} and is shown in Fig. \ref{fig:fg}.
\begin{figure}
\centering
\includegraphics[width=8cm]{FG2-eps-converted-to.pdf}\\
\caption{Factor graph representation of the joint posterior
distribution}\label{fig:fg}
\end{figure}
The resulting Sum \& Product messages are computed by
\begin{equation}\label{pf}
p_{f}(\theta_{k}) =
\int_{0}^{2\pi}p_{f}(\theta_{k-1})p_{d}(\theta_{k-1})p_{\Delta}(\theta_{k}-\theta_{k-1})d\theta_{k-1}
\end{equation}
\begin{equation}\label{pb}
p_{b}(\theta_{k}) =
\int_{0}^{2\pi}p_{b}(\theta_{k+1})p_{d}(\theta_{k+1})p_{\Delta}(\theta_{k+1}-\theta_{k})d\theta_{k+1}
\end{equation}
\begin{equation}\label{pd}
p_{d}(\theta_{k}) = \sum_{x \in \mathbb{A}} P_{d}(c_{k}=x) e_{k}(c_{k},\theta_{k})
\end{equation}
\begin{equation}\label{Pu}
P_{u}(c_{k}) =
\int_{0}^{2\pi}p_{f}(\theta_{k})p_{b}(\theta_{k})e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
\begin{equation}\label{fk}
e_{k}(c_{k},\theta_{k}) \propto
\exp\{-\frac{|r_{k}-c_{k}e^{j\theta_{k}}|^{2}}{2\sigma^{2}}\}
\end{equation}
\begin{equation}\label{p_del}
p_{\Delta}(\theta_{k}) =
\sum^{\infty}_{l=-\infty}g(0,\sigma_{\Delta}^{2},\theta_{k}-l2\pi)
\end{equation}
where $r_{k}$, $P_{d}$, $\sigma^{2}$ and
$g(0,\sigma_{\Delta}^{2},\theta)$ are the
received baseband signal, the symbol soft information from the LDPC decoder,
the AWGN variance and the Gaussian distribution, respectively. The messages
$p_{f}(\theta_{k})$ and $p_{b}(\theta_{k})$ are called in this paper
the forward and backward phase noise SP messages, respectively.
The detection process starts with the channel section providing the first LLRs ($P_{u}(c_{k})$) to the
LDPC decoder, and so on. A different scheduling could be applied on a general setting, but this will not
be possible with the algorithms in this paper. Due to the fact that the phase symbols are continuous
random variables, a direct implementation of these equations is not possible
and approximations are unavoidable. Assuming enough quantization
levels, the DP algorithm can approximate the above equations as close
as we wish. However, this algorithm requires large computational
resources to reach high accuracy, rendering it not practical for some
real world applications. In \cite{shachar2012},\cite{shachar_multi2012} and \cite{shachar_old2012}, modified Tikhonov
approximations were used for the messages in the SPA which lead to a
very simple and fast algorithm. In this paper, an approximate
inference algorithm is proposed which better balances the tradeoff
between accuracy and complexity for strong phase noise channels.
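As a reference point, the DP forward recursion (\ref{pf}) is straightforward to sketch on a quantized phase grid. The snippet below is our illustration only (BPSK symbols with uniform priors and arbitrary parameter values are assumptions), implementing one forward pass:

```python
# Sketch of the discrete-phase (DP) forward recursion on L quantized
# phase levels, for BPSK symbols with uniform priors (illustrative).
import cmath, math

L = 64                                  # phase quantization levels
sigma2 = 0.1                            # AWGN variance (assumed)
sigma_delta = 0.05                      # phase-increment std (assumed)
grid = [2 * math.pi * l / L for l in range(L)]

def p_delta(dtheta):
    # wrapped Gaussian (7), truncated to a few wraps
    return sum(math.exp(-(dtheta - 2 * math.pi * l) ** 2 / (2 * sigma_delta ** 2))
               for l in range(-2, 3))

# quantized transition kernel p_Delta(theta_i - theta_j)
T = [[p_delta(grid[i] - grid[j]) for j in range(L)] for i in range(L)]

def p_d(r, theta):
    # symbol-averaged likelihood (4)/(6), BPSK with uniform priors
    return sum(0.5 * math.exp(-abs(r - c * cmath.exp(1j * theta)) ** 2 / (2 * sigma2))
               for c in (1.0, -1.0))

def forward(received):
    pf = [1.0 / L] * L                  # theta_0 uniform
    msgs = [pf]
    for r in received:
        pd = [p_d(r, th) for th in grid]
        new = [sum(pf[j] * pd[j] * T[i][j] for j in range(L)) for i in range(L)]
        z = sum(new)
        pf = [x / z for x in new]       # normalize p_f(theta_k)
        msgs.append(pf)
    return msgs
```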
\section{Preliminaries}
\subsection{Directional Statistics}
\label{sec:pagestyle}
Directional statistics is a branch of mathematics which studies random
variables defined on circles and spheres. For example, the probability
of the wind to blow at a certain direction. The \emph{circular} mean
and variance of a circular random variable $\theta$, are defined in
\cite{mardia2000}, as
\begin{equation}\label{circ_mu}
\mu_{C} = \angle \mathbb{E} (e^{j\theta})
\end{equation}
\begin{equation}\label{circ_var}
\sigma^{2}_{C} = \mathbb{E}\left[1-\cos(\theta-\mu_{C})\right]
\end{equation}
One can see that for small angle variations around the \emph{circular}
mean, the definition of the \emph{circular} variance coincides (up to a
factor of $\frac{1}{2}$) with the standard definition of the variance of
a random variable defined on the real axis, since
$1-\cos(\theta-\mu_{C}) \approx \frac{(\theta-\mu_{C})^{2}}{2}$.
One of the most commonly used circular distributions is the Tikhonov
distribution and is defined as,
\begin{equation}\label{tikh_define}
g(\theta) = \frac{e^{Re[\kappa_{g}e^{-j(\theta-\mu_{g})}]}}{2\pi
I_{0}(\kappa_{g})}
\end{equation}
According to (\ref{circ_mu}) and (\ref{circ_var}), the \emph{circular}
mean and \emph{circular} variance of a Tikhonov distribution are,
\begin{equation}\label{tikh_mu}
\mu_{C} = \mu_{g}
\end{equation}
\begin{equation}\label{tikh_var}
\sigma^{2}_{C} = 1-\frac{I_{1}(\kappa_{g})}{I_{0}(\kappa_{g})}
\end{equation}
where $I_{0}(x)$ and $I_{1}(x)$ are the modified Bessel function of
the first kind of the zero and first order, respectively.
An alternative formulation for the Tikhonov pdf uses a single complex
parameter $z = \kappa_{g} e^{j\mu_{g}}$. Note that the residual phase
error of a first-order PLL, when the input phase is constant, follows a
Tikhonov distribution.
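Relations (\ref{tikh_mu})-(\ref{tikh_var}) can be verified numerically; in the sketch below (illustrative helper code, with $I_0, I_1$ computed from their power series) the circular moments of a Tikhonov pdf are obtained by direct integration on the circle:

```python
# Check mu_C = mu_g and sigma_C^2 = 1 - I1(kappa)/I0(kappa) numerically.
# bessel_i and the parameter values are illustrative helpers.
import cmath, math

def bessel_i(n, x):
    # power series for the modified Bessel function I_n(x)
    term = (x / 2) ** n / math.factorial(n)
    total = term
    for k in range(1, 200):
        term *= (x / 2) ** 2 / (k * (k + n))
        total += term
        if term < total * 1e-17:
            break
    return total

kappa, mu = 3.0, 0.7

def tikhonov(th):
    return math.exp(kappa * math.cos(th - mu)) / (2 * math.pi * bessel_i(0, kappa))

# E[e^{j theta}] by the trapezoid rule (spectrally accurate on the circle)
N = 4096
h = 2 * math.pi / N
m1 = sum(cmath.exp(1j * k * h) * tikhonov(k * h) * h for k in range(N))

assert abs(cmath.phase(m1) - mu) < 1e-9                   # circular mean
assert abs(abs(m1) - bessel_i(1, kappa) / bessel_i(0, kappa)) < 1e-9
```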
\subsection{Circular Mean \& Variance Matching}
\label{ssec:subhead}
In this section we will present a new theorem in directional statistics. The theorem states that the
nearest Tikhonov distribution, $g(\theta)$, to any circular
distribution, $f(\theta)$, in the Kullback-Leibler (KL) sense, has its
circular mean and variance matched to those of the circular
distribution.
The Kullback-Leibler (KL) divergence is a common information theoretic
measure of similarity between probability distributions, and is
defined as \cite{KL1951},
\begin{equation}\label{KL1}
D(f||g) \triangleq \int_0^{2\pi}f(\theta)\log
\frac{f(\theta)}{g(\theta)} d\theta
\end{equation}
\begin{mydef}
We define the operator $g(\theta) = \textsf{CMVM}[f(\theta)]$
(Circular Mean and Variance Matching), to take a circular pdf -
$f(\theta)$ and create a Tikhonov pdf $g(\theta)$ with the same
circular mean and variance.
\end{mydef}
\begin{theorem}
\emph{(CMVM):}
\label{mix_tikh_thr}
Let $f(\theta)$ be a circular distribution, then the Tikhonov
distribution $g(\theta)$ which minimizes $D(f||g)$ is,
\begin{equation}\label{CMVM_thr}
g(\theta) = \textsf{CMVM}[f(\theta)]
\end{equation}
\end{theorem}
The proof can be found in appendix \ref{sec:cmvm_thr}.
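Operationally, CMVM amounts to computing the circular moment $\mathbb{E}e^{j\theta}$ of the mixture and inverting $I_{1}(\kappa)/I_{0}(\kappa)$ to recover the matched Tikhonov parameter. A sketch (illustrative helper code; the bisection bounds and example mixture are assumptions, not values from the paper):

```python
# CMVM for a Tikhonov mixture: match circular mean and variance, then
# invert rho(kappa) = I1(kappa)/I0(kappa) by bisection.
import cmath, math

def bessel_i(n, x):
    # power series for I_n(x); adequate for the moderate kappa used here
    term = (x / 2) ** n / math.factorial(n)
    total = term
    for k in range(1, 400):
        term *= (x / 2) ** 2 / (k * (k + n))
        total += term
        if term < total * 1e-17:
            break
    return total

def rho(kappa):                       # mean resultant length I1/I0
    return bessel_i(1, kappa) / bessel_i(0, kappa)

def inv_rho(target, lo=1e-9, hi=50.0):
    for _ in range(100):              # bisection; rho is increasing in kappa
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def cmvm(alphas, mus, kappas):
    # circular moment E[e^{j theta}] of the Tikhonov mixture
    m = sum(a * cmath.exp(1j * mu) * rho(k)
            for a, mu, k in zip(alphas, mus, kappas))
    return cmath.phase(m), inv_rho(abs(m))        # (mu_g, kappa_g)

mu_g, kappa_g = cmvm([0.7, 0.3], [0.2, 0.5], [20.0, 15.0])
m_abs = abs(0.7 * cmath.exp(0.2j) * rho(20.0) + 0.3 * cmath.exp(0.5j) * rho(15.0))
assert 0.2 < mu_g < 0.5               # mean lies between the component means
assert abs(rho(kappa_g) - m_abs) < 1e-9
```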
\subsection{Helpful Results for KL Divergence}
We introduce the reader to three results related to the Kullback-Leibler Divergence which
will prove helpful in the next sections.
\begin{lemma}\label{kl_bound1}
Suppose we have two distributions, $f(\theta)$ and $g(\theta)$,
\[f(\theta) = \sum_{i=1}^{M}\alpha_{i}f_{i}(\theta)\]
\begin{equation}
D_{KL}(\sum_{i=1}^{M}\alpha_{i}f_{i}(\theta) || g(\theta)) \leq
\sum_{i=1}^{M}\alpha_{i}D_{KL}(f_{i}(\theta) || g(\theta))
\end{equation}
\end{lemma}
The proof of this bound can be found in \cite{runnalls2007} and is
based on the Jensen inequality.
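The bound expresses the convexity of the KL divergence in its first argument, and can be spot-checked on random discrete distributions (an illustrative check only):

```python
# Numeric check of Lemma 1 (Jensen/convexity bound) on discrete pmfs.
import math, random

random.seed(1)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def rand_pmf(n=6):
    w = [random.random() + 1e-3 for _ in range(n)]
    z = sum(w)
    return [x / z for x in w]

for _ in range(100):
    f1, f2, g = rand_pmf(), rand_pmf(), rand_pmf()
    a = random.random()
    mix = [a * x + (1 - a) * y for x, y in zip(f1, f2)]
    assert kl(mix, g) <= a * kl(f1, g) + (1 - a) * kl(f2, g) + 1e-12
```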
\begin{lemma}\label{kl_bound2}
Suppose we have three distributions, $f(\theta)$ ,$g(\theta)$ and
$h(\theta)$. We define the following mixtures,
\begin{equation}\label{11}
f_{1}(\theta) = \alpha f(\theta) + (1-\alpha)g(\theta)
\end{equation}
\begin{equation}\label{12}
f_{2}(\theta) = \alpha f(\theta) + (1-\alpha)h(\theta)
\end{equation}
for $0 \leq \alpha \leq 1$.
Then,
\begin{equation}
D_{KL}(f_{1}(\theta) || f_{2}(\theta)) \leq
(1-\alpha)D_{KL}(g(\theta) || h(\theta))
\end{equation}
\end{lemma}
The proof for this identity can also be found in \cite{runnalls2007}.
\begin{lemma}\label{kl_bound3}
Suppose we have two mixtures, $f(\theta)$ and $g(\theta)$, of the same order $M$,
\[f(\theta) = \sum_{i=1}^{M}\alpha_{i}f_{i}(\theta)\]
and
\[g(\theta) = \sum_{j=1}^{M}\beta_{j}g_{j}(\theta)\]
Then the KL divergence between them can be upper bounded by,
\begin{equation}
D_{KL}(f(\theta) || g(\theta)) \leq
D_{KL}(\alpha || \beta) + \sum_{i=1}^{M}\alpha_{i}D_{KL}(f_{i}(\theta) || g_{i}(\theta))
\end{equation}
\end{lemma}
where $D_{KL}(\alpha || \beta)$ is the KL divergence between the probability mass functions defined by the
coefficients $\alpha_{i}$ and $\beta_{i}$. The proof of this bound uses the log-sum inequality and can
be found in \cite{Minh2003}.
\section{Tikhonov Mixture Canonical Model}
In this section we present the Tikhonov mixture canonical model
for approximating the forward and backward phase noise SP messages. First, we motivate
the use of a mixture model for $p_{f}(\theta_{k})$ and $p_{b}(\theta_{k})$. The message $p_{f}(\theta_{k})$ is the posterior phase distribution given the causal information $(r_{0},...,r_{k-1})$. If
we track the (local) maxima of these messages over time, we observe phase trajectories. A phase trajectory is a
hypothesis about the phase noise process given the data. In the case of zero a priori information, there is
a $\frac{2\pi}{M}$ ambiguity in the phase trajectory, i.e. there are $M$ parallel phase
trajectories with $\frac{2\pi}{M}$ separation between them.
Having a priori information on the data, such as a preamble or pilots, can strengthen the correct hypothesis
and gradually remove wrong trajectories. However, as we move away from the known data, more hypotheses
emerge. These dynamics are illustrated in Fig. \ref{fig:splits}, where we have plotted in three dimensions
the forward phase noise messages ($p_{f}(\theta_{k})$) of the DP algorithm. The DP algorithm computes the
forward phase messages (\ref{pf}) on a quantized phase space. The axes
represent the time sample index, the quantized phase
for each symbol, and the posterior probability (Z-axis). In this figure there is only a short preamble
at the beginning and the end of the block, and thus the first forward
messages are single mode Tikhonov distributions, which form a single
trajectory at the beginning of the figure and converge to a single trajectory at the end.
After the preamble, due to additive noise and phase noise, the algorithm occasionally cannot decide which
phase trajectory is correct because of ambiguity in the symbols, so it continues with two
trajectories, each with its relative probability of occurring. Such a point is a split in the phase
trajectories and is analogous to a cycle slip in a PLL. If we approximate the messages at each point in
time as a Tikhonov mixture of varying order, then each time there is a split, more components are added
to the mixture, and each time there is a merge, the number of components decreases.
This understanding of the underlying structure of the phase messages is one of the most important
contributions of this paper and is the basis of the mixture model approach.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{splits1.eps}\\
\caption{SP Phase Noise Forward Messages}\label{fig:splits}
\end{figure}
The advantage of using mixtures is in the ability to track several
phase trajectories simultaneously and provide better extrinsic
information to the LDPC decoder, which in turn will provide better
information on the code symbols to the phase estimator. In this way
the joint detection and estimation will converge quickly and avoid
error floors. However, as will be shown later, approximating the SP
messages using mixtures is a difficult task, since the mixture
order increases exponentially as the phase tracking progresses along
the received block. Therefore, an efficient dimension reduction algorithm is needed. In the following sections we
will propose a mixture reduction algorithm for the adaptive mixture
model. But first we will formulate the mixture reduction task
mathematically and describe algorithms which attempt to accomplish this task.
\subsection{Mixture Reduction - Problem Formulation}
As proposed above, the forward and backward messages are approximated
using Tikhonov mixtures,
\begin{equation}\label{new_pf}
p_{f}(\theta_{k}) = \sum_{i
=1}^{N_{f}^{k}}\alpha^{k,f}_{i}t^{k,f}_{i}(\theta_{k})
\end{equation}
\begin{equation}\label{new_pb}
p_{b}(\theta_{k}) = \sum_{i
=1}^{N_{b}^{k}}\alpha^{k,b}_{i}t^{k,b}_{i}(\theta_{k})
\end{equation}
where:
\begin{equation}\label{f_f_i}
t^{k,f}_{i}(\theta_{k}) = \frac{e^{Re[z^{k,f}_{i}
e^{-j\theta_{k}}]}}{2\pi I_{0}(|z^{k,f}_{i}|)}
\end{equation}
\begin{equation}\label{f_b_i}
t^{k,b}_{i}(\theta_{k}) = \frac{e^{Re[z^{k,b}_{i}
e^{-j\theta_{k}}]}}{2\pi I_{0}(|z^{k,b}_{i}|)}
\end{equation}
and
$\alpha^{k,f}_{i}$, $\alpha^{k,b}_{i}$, $z^{k,f}_{i}$, $z^{k,b}_{i}$ are the
mixture coefficients and Tikhonov parameters of the forward and
backward messages for the phase sample $\theta_{k}$.
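For concreteness, the Tikhonov component (\ref{f_f_i}) is straightforward to evaluate numerically; the sketch below (with a hypothetical parameter value) checks that it is a properly normalized density:

```python
import numpy as np

def tikhonov_pdf(theta, z):
    # Tikhonov (von Mises) density with complex parameter z:
    # concentration |z|, mean direction angle(z).
    return np.exp(np.real(z * np.exp(-1j * theta))) / (2 * np.pi * np.i0(abs(z)))

z = 5.0 * np.exp(1j * 0.3)                      # hypothetical parameter
theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
mass = tikhonov_pdf(theta, z).sum() * (2 * np.pi / 4096)
assert abs(mass - 1.0) < 1e-9                   # integrates to one
```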
If we insert approximations (\ref{new_pf}) and (\ref{new_pb}) into
the forward and backward recursion equations (\ref{pf}) and (\ref{pb})
respectively, we get,
\begin{multline}\label{pf_eq}
\tilde{p}_{f}(\theta_{k}) =
\sum_{i =1}^{N_{f}^{k-1}}\int_{0}^{2\pi}\alpha^{k-1,f}_{i}t^{k-1,f}_{i}(\theta_{k-1})p_{d}(\theta_{k-1})\\ p_{\Delta}(\theta_{k}-\theta_{k-1})d\theta_{k-1}
\end{multline}
\begin{multline}\label{pb_eq}
\tilde{p}_{b}(\theta_{k}) = \sum_{i=1}^{N_{b}^{k+1}}\int_{0}^{2\pi}\alpha^{k+1,b}_{i}t^{k+1,b}_{i}(\theta_{k+1})p_{d}(\theta_{k+1})\\ p_{\Delta}(\theta_{k+1}-\theta_{k})d\theta_{k+1}
\end{multline}
It is shown in \cite{barb2005} that the convolution of a Tikhonov and a Gaussian distribution is again a
Tikhonov distribution, which yields,
\begin{equation}\label{pf_eq1}
\tilde{p}_{f}(\theta_{k}) =
\sum_{i =1}^{N_{f}^{k-1}}\sum_{x \in \mathbb{A}}\alpha^{k-1,f}_{i}\lambda^{k-1,f}_{i,x}\frac{e^{Re[\gamma(\sigma_{\Delta},\tilde{Z}^{k-1,f}_{i,x})e^{-j\theta_{k}}]}}{2\pi I_{0}(|\gamma(\sigma_{\Delta},\tilde{Z}^{k-1,f}_{i,x})|)}
\end{equation}
\begin{equation}\label{pb_eq1}
\tilde{p}_{b}(\theta_{k}) =
\sum_{i =1}^{N_{b}^{k+1}}\sum_{x \in \mathbb{A}}\alpha^{k+1,b}_{i}\lambda^{k+1,b}_{i,x}\frac{e^{Re[\gamma(\sigma_{\Delta},\tilde{Z}^{k+1,b}_{i,x})e^{-j\theta_{k}}]}}{2\pi I_{0}(|\gamma(\sigma_{\Delta},\tilde{Z}^{k+1,b}_{i,x})|)}
\end{equation}
where
\begin{equation}\label{Z_f}
\tilde{Z}^{k-1,f}_{i,x} = z^{k-1,f}_{i}+\frac{r_{k-1}x^{*}}{\sigma^2}
\end{equation}
\begin{equation}\label{coeff_f}
\lambda^{k-1,f}_{i,x} = \frac{1}{A}P_{d}(c_{k-1}=x)\frac{I_{0}(|\tilde{Z}^{k-1,f}_{i,x}|)}{I_{0}(|z^{k-1,f}_{i}|)}
\end{equation}
\begin{equation}\label{Z_b}
\tilde{Z}^{k+1,b}_{i,x} = z^{k+1,b}_{i}+\frac{r_{k+1}x^{*}}{\sigma^2}
\end{equation}
\begin{equation}\label{coeff_b}
\lambda^{k+1,b}_{i,x} = \frac{1}{B}P_{d}(c_{k+1}=x)\frac{I_{0}(|\tilde{Z}^{k+1,b}_{i,x}|)}{I_{0}(|z^{k+1,b}_{i}|)}
\end{equation}
\begin{equation}\label{gamma}
\gamma(\sigma_{\Delta},Z) = \frac{Z}{1+|Z|\sigma^{2}_{\Delta}}
\end{equation}
where $A$ and $B$ are normalizing constants.
Therefore, (\ref{pf_eq1}) and (\ref{pb_eq1}) are Tikhonov mixtures of
order $N_{f}^{k-1}M$ and $N_{b}^{k+1}M$ respectively. Since we do not want the mixture order to grow with every
symbol, a mixture dimension reduction algorithm must be derived which
captures ``most'' of the information in the mixtures $\tilde{p}_{f}(\theta_{k})$ and
$\tilde{p}_{b}(\theta_{k})$, while keeping the computational complexity low. From
now on, we present only the forward approximations; the same
applies to the backward ones.
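In code, one step of the forward expansion (\ref{pf_eq1}) is just parameter arithmetic per (\ref{Z_f}), (\ref{coeff_f}) and (\ref{gamma}). A minimal sketch (the QPSK alphabet, noise levels and parameter values are hypothetical):

```python
import numpy as np

def forward_step(z_prev, w_prev, r, alphabet, priors, sigma2, sigma2_delta):
    # One forward expansion step: each Tikhonov component spawns M new
    # components, one per symbol, following (Z_f), (coeff_f) and (gamma).
    Z_new, w_new = [], []
    for zi, wi in zip(z_prev, w_prev):
        for x, px in zip(alphabet, priors):
            Zt = zi + r * np.conj(x) / sigma2                # parameter update
            lam = wi * px * np.i0(abs(Zt)) / np.i0(abs(zi))  # unnormalized weight
            Z_new.append(Zt / (1 + abs(Zt) * sigma2_delta))  # Gaussian convolution
            w_new.append(lam)
    w_new = np.array(w_new) / np.sum(w_new)
    return np.array(Z_new), w_new

# Hypothetical QPSK example with a single prior component.
alphabet = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
priors = np.full(4, 0.25)
z_out, w_out = forward_step([2.0 + 0.0j], [1.0], r=0.9 + 0.1j,
                            alphabet=alphabet, priors=priors,
                            sigma2=0.5, sigma2_delta=1e-3)
assert len(z_out) == 4 and abs(w_out.sum() - 1.0) < 1e-12
```

The largest weight lands on the symbol whose conjugate rotation best aligns with the received sample, which is exactly the hypothesis-splitting behavior described above.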
There are many metrics used for mixture reduction. The two most
commonly used are the Integral Squared Error (ISE) and the KL. The ISE
metric is defined for mixtures $f(\theta)$ and $g(\theta)$ as follows,
\begin{equation}\label{ise}
D_{ISE}(f(\theta)||g(\theta)) = \int_{0}^{2\pi}(f(\theta)-g(\theta))^{2}d\theta
\end{equation}
We chose the KL divergence rather than the ISE as the cost function between the reduced
mixture and the original mixture, since the former is
expected to give better results. For example, consider a scenario with a low
probability, isolated cluster of components. If the reduction algorithm prunes that
cluster, the ISE based cost is barely affected, whereas a KL based reduction must assign a cluster to it,
since the cost of not approximating it is very high. In general, the KL divergence does not weight the
components by their probability, while the ISE does. This feature of the KL divergence is useful since we wish to track all the significant
phase trajectories regardless of their probability.
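This qualitative difference is easy to demonstrate numerically. In the discrete sketch below (all parameters hypothetical), a reduction that prunes a well separated, low probability component leaves the ISE almost unchanged while the KL divergence becomes large:

```python
import numpy as np

grid = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
d = grid[1] - grid[0]

def vm(mu, kappa):
    # von Mises density on the grid, normalized numerically.
    p = np.exp(kappa * np.cos(grid - mu))
    return p / (p.sum() * d)

# f: dominant component plus an isolated low-probability cluster.
f = 0.95 * vm(1.0, 50.0) + 0.05 * vm(4.0, 50.0)
g = vm(1.0, 50.0)                  # reduction that prunes the cluster

ise = np.sum((f - g) ** 2) * d     # barely notices the pruned cluster
kl = np.sum(f * np.log(f / g)) * d # dominated by the pruned cluster
assert ise < 0.05 and kl > 1.0
```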
We define the following mixture reduction task using the Kullback-Leibler
divergence - \emph{Given a Tikhonov mixture $f(\theta)$ of
order $L$, find a Tikhonov mixture $g(\theta)$ of order $N$ ($L>N$),
which minimizes,}
\begin{equation}\label{obj}
D_{KL}(f(\theta)||g(\theta))
\end{equation}
\emph{where,}
\begin{equation}\label{orig_mix}
f(\theta)= \sum_{i =1}^{L}\alpha_{i}f_{i}(\theta)
\end{equation}
\begin{equation}\label{red_mix}
g(\theta)= \sum_{j =1}^{N}\beta_{j}g_{j}(\theta)
\end{equation}
where $f(\theta)$ is the mixture $\tilde{p}_{f}(\theta_{k})$ and the reduced order mixture $g(\theta)$
will be the next forward message, $p_{f}(\theta_{k})$.
We would like to provide additional insight into the choice of the KL divergence. Its information
theoretic meaning is that the
loss in bits when compressing a source with probability $f(\theta)$,
using a code matched to the probability $g(\theta)$, will be no
larger than $\epsilon$. Thus, we wish to find a lower order mixture $g(\theta)$ which is a compressed
version of $f(\theta)$.
\subsection{Mixture Reduction algorithms - Review}
There is no analytical solution for (\ref{obj}), but many mixture reduction algorithms
provide a suboptimal
solution for it. They can generally be classified into two
groups, \emph{local} and \emph{global} algorithms. The
\emph{global} algorithms attempt to solve (\ref{obj}) by gradient
descent type solutions, which are computationally very demanding. The \emph{local} algorithms usually start
from a large
mixture and prune out components or merge similar components, according
to some rule, until a target mixture order is reached. A very
good summary of many of these algorithms can be found in
\cite{sv2011}. The \emph{global} algorithms do not deal with the KL
divergence and thus are not suited for our problem. In the following we review the two
\emph{local} algorithms which best balance the tradeoff between
complexity and accuracy, and show why they fail in our case.
The first algorithm is the one proposed in \cite{runnalls2007}. This
algorithm minimizes a \emph{local} problem, which \emph{sometimes} provides a
good approximation for (\ref{obj}).
Given (\ref{orig_mix}), the algorithm finds a pair of mixture components, $f_{i^{*}}$ and
$f_{k^{*}}$ which satisfy,
\begin{equation}\label{runnalls}
[i^{*},k^{*}] = \arg\min_{i,k}D_{KL}(\alpha f_{i} + (1-\alpha)
f_{k}||g_{j}(\theta))
\end{equation}
where,
\begin{equation}\label{runnalls1}
g_{j}(\theta) = CMVM(\alpha f_{i} + (1-\alpha) f_{k})
\end{equation}
and $\alpha$ is the normalized probability of $f_{i}$, obtained by dividing its probability by the
sum of the probabilities of $f_{i}$ and $f_{k}$.
The algorithm merges the two components to $g_{j}(\theta)$, thus the order of (\ref{orig_mix}) has now
decreased by one. This procedure is now repeated on the new mixture iteratively
to find another optimal pair until the target mixture order is reached.
It should be noted that a component's probability influences
the metric (\ref{runnalls}). Suppose we have two very different components, one with
high probability and one with very low probability, and the latter is the correct hypothesis. Then the
algorithm may choose to
cluster them, and the low probability component, which may be the correct trajectory, will be lost.
Another algorithm is the one proposed in \cite{goldberger2004hierarchical}, which also does not directly
solve
(\ref{obj}), but defines another metric which is much easier to handle
mathematically. The algorithm's operation is very similar to the
K-means algorithm. It first chooses an initial reduced mixture
$g(\theta)$ and then iteratively performs the following,
\begin{enumerate}
\item \emph{Select the clusters} - Map all $f_{i}$ to the $g_{j}$
which minimizes $D_{KL}(f_{i}||g_{j})$
\item \emph{Regroup} - For all $j$, optimally cluster the elements
$f_{i}$ which were mapped to each $g_{j}$ to create the new
$g(\theta)$
\end{enumerate}
The convergence of this algorithm to a good reduced mixture depends on its initial conditions. Moreover,
the iterative
process increases the computational complexity significantly.
In \cite{goldberger2004hierarchical} and \cite{runnalls2007}, the
Gaussian case was considered, so the clustering was performed using
Gaussian moment matching. For our setting, we replace the moment
matching with CMVM, since we deal with Tikhonov rather than Gaussian distributions. Note that in both algorithms the
target order must
be defined in advance, since the algorithms must know when to stop.
Selecting the proper target mixture order is a difficult task.
On one hand, if we choose a large target order, the complexity
will be too high. On the other hand, if we choose a low order,
the algorithm may cluster components which clearly should not be
merged, simply because they attain the minimal KL divergence.
Therefore, in order to maintain a good level of accuracy, the task
should be to guarantee an upper bound on the KL divergence rather than
try, unsuccessfully, to minimize it. Moreover, it should be noted that in
our setting the mixture reduction task (\ref{obj}) is performed many
times, not once. Therefore, there is no need to use the
same reduced mixture order for each symbol. These ideas lead us to
the \emph{adaptive} mixture canonical model approach presented in the next section.
\section{A New Approach to Mixture Reduction}
We have seen that the current state of the art low complexity mixture
reduction algorithms, based on a fixed target mixture order, do not provide good enough approximations to
(\ref{obj}). Moreover, the choice of the mixture order plays a crucial part in the clustering task. On one
hand, a small mixture order will provide a poor
SP message approximation, which will propagate over the factor graph
and cause a degradation in performance. On the other hand, a large
mixture order will demand too many computational resources.
Instead of reducing (\ref{pf_eq1}) and (\ref{pb_eq1}) to a fixed order,
we propose a new approach which has better accuracy while keeping the
complexity low. Since we are performing Bayesian inference on a large data
block, we have many mixture reductions to perform rather than just a single one. Therefore, in terms of
computational complexity, it is useful to use different mixture orders
for different symbols and to take the average number of components as
the measure of complexity. This observation is
critical in achieving high accuracy and low PER while keeping the
computational complexity low.
We define the \emph{new mixture reduction task} -
\emph{Given a Tikhonov mixture $f(\theta)$,}
\begin{equation}\label{in}
f(\theta) = \sum_{i=1}^{L}\alpha_{i}f_{i}(\theta)
\end{equation}
\emph{Find the Tikhonov mixture $g(\theta)$ with the minimum number of
components $N$}
\begin{equation}\label{out}
g(\theta) = \sum_{j=1}^{N}\beta_{j}g_{j}(\theta)
\end{equation}
\emph{which satisfies,}
\begin{equation}\label{task}
D_{KL}(f(\theta)||g(\theta)) \leq \epsilon
\end{equation}
Solving this new task guarantees that the approximation error is upper
bounded, so we can keep the PER low. Moreover, simulations show that the resulting mixtures are of very small size. In the
following section, we present a low complexity algorithm which finds a mixture $g(\theta)$ whose average
number of mixture components is low.
\subsection{Mixture Reduction Algorithm}
In this section, a mixture reduction algorithm is proposed which is suboptimal in
the sense that it does not necessarily find the minimal number of components, but
it finds a low order mixture which satisfies (\ref{task}) for any $\epsilon$. The
algorithm, whose details are given in pseudo-code in Algorithm 1, uses
the CMVM approach for optimally merging a Tikhonov mixture into a
single Tikhonov distribution.
\begin{algorithm}
\caption{Mixture Reduction Algorithm}
\label{mix_redc_algo}
{\fontsize{10}{5}
\begin{algorithmic}
\State $j \gets 1$
\While{$|f(\theta)| > 0$}
\State $lead \gets \arg\max\{\underline{\alpha}\}$
\For{$i = 1 \to |f(\theta)|$}
\If {$D_{KL}(f_{i}(\theta) || f_{lead}(\theta)) \leq \epsilon $}
\State $idx \gets [idx , i]$
\EndIf
\EndFor
\State $\beta_{j} \gets \sum_{i\in idx}\alpha_{i}$
\State $g_{j}(\theta) \gets CMVM(\sum_{i\in
idx}\frac{\alpha_{i}}{\beta_{j}}f_{i}(\theta))$
\State $f(\theta) \gets f(\theta) - \sum_{i \in idx}{\alpha_{i}f_{i}(\theta)} $
\State Normalize $f(\theta)$
\State $j \gets j + 1$
\EndWhile
\end{algorithmic}}
\end{algorithm}
The input to this algorithm, $f(\theta)$, is the Tikhonov mixture (\ref{pf_eq1}), and the output Tikhonov
mixture $g(\theta)$ is a reduced version of
$f(\theta)$ which approximates the next forward or backward message. Note that the function $|f(\theta)|$
returns the number of Tikhonov components in the mixture $f(\theta)$. The computations of $D_{KL}(f_{i}(\theta) || f_{lead}(\theta))$ and $CMVM(\sum_{i\in idx}\frac{\alpha_{i}}{\beta_{j}}f_{i}(\theta))$ are detailed in Appendices \ref{sec:kl_comp} and \ref{sec:cmvm_exmp}.
At the beginning of each iteration, the algorithm selects the highest probability mixture component and
clusters it with all the components which are similar to it (in the KL sense). It then finds the next highest
probability component and repeats the procedure until there are no components left to cluster. We now
show that for any $\epsilon$, the algorithm satisfies (\ref{task}).
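The clustering loop of Algorithm 1 is compact in code. The sketch below is a hypothetical Python rendering, not the paper's implementation: the closed-form Tikhonov KL divergence and the CMVM merge (both derived in the appendices) are replaced here by the standard large-$\kappa$ approximation of $I_{1}(\kappa)/I_{0}(\kappa)$ and the Best-Fisher inversion, so the numerical values are approximate:

```python
import numpy as np

def A(kappa):
    # Large-argument approximation of I1(k)/I0(k) (an assumption used
    # here to stay within numpy; valid for moderately large kappa).
    return 1.0 - 1.0 / (2.0 * kappa) - 1.0 / (8.0 * kappa**2)

def kl_tikhonov(z1, z2):
    # KL divergence between two Tikhonov densities with complex
    # parameters z1, z2, using the A(kappa) approximation above.
    m1 = A(abs(z1)) * np.exp(1j * np.angle(z1))    # circular mean of t1
    return np.log(np.i0(abs(z2)) / np.i0(abs(z1))) + np.real((z1 - z2) * np.conj(m1))

def cmvm(zs, ws):
    # Merge a Tikhonov mixture into a single Tikhonov by circular moment
    # matching; the Best-Fisher formula approximately inverts A(kappa).
    m = sum(w * A(abs(z)) * np.exp(1j * np.angle(z)) for z, w in zip(zs, ws))
    R = abs(m)
    return (R * (2 - R**2) / (1 - R**2)) * np.exp(1j * np.angle(m))

def reduce_mixture(zs, ws, eps):
    # Greedily cluster around the leading component until no components
    # remain; weights are kept unnormalized so they still sum to 1.
    zs, ws = list(zs), list(ws)
    out_z, out_w = [], []
    while zs:
        lead = zs[int(np.argmax(ws))]
        idx = [i for i in range(len(zs)) if kl_tikhonov(zs[i], lead) <= eps]
        beta = sum(ws[i] for i in idx)
        out_z.append(cmvm([zs[i] for i in idx], [ws[i] / beta for i in idx]))
        out_w.append(beta)
        zs = [zs[i] for i in range(len(zs)) if i not in idx]
        ws = [ws[i] for i in range(len(ws)) if i not in idx]
    return out_z, out_w

# Two nearby trajectories plus one far-away hypothesis (hypothetical values):
z_in = [5.0 + 0.0j, 5.5 * np.exp(0.05j), 5.0 * np.exp(1j * np.pi)]
w_in = [0.5, 0.3, 0.2]
z_out, w_out = reduce_mixture(z_in, w_in, eps=0.5)
assert len(z_out) == 2 and abs(sum(w_out) - 1.0) < 1e-12
```

The two aligned components are merged into one cluster while the opposite-phase hypothesis survives as its own component, which is precisely the behavior the accuracy theorem below formalizes.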
\begin{theorem}
\emph{(Mixture Reduction Accuracy):}
\label{mix_red_acc}
Let $f(\theta)$ be a Tikhonov mixture of order $L$ and $\epsilon$ be a real positive number. Then,
applying the Mixture Reduction Algorithm 1 to $f(\theta)$ using $\epsilon$, produces a Tikhonov mixture
$g(\theta)$, of order $N$ which satisfies,
\begin{equation}\label{mix_red_thr}
D_{KL}(f(\theta)||g(\theta)) \leq \epsilon
\end{equation}
\end{theorem}
\begin{proof}
In the first iteration, the algorithm selects the highest probability mixture component of (\ref{in}) and
denotes it as $f_{lead}(\theta)$.
Let $M_{0}$, be the set of mixture components $f_{i}(\theta)$ selected for clustering,
\begin{equation}\label{rule_kl}
M_{0} = \{f_{i}(\theta) \: | \: D_{KL}(f_{i}(\theta) || f_{lead}(\theta)) \leq \epsilon \}
\end{equation}
and $M_{1}$ be the set of mixture components which were not selected,
\begin{equation}\label{rule_kl2}
M_{1} = \{f_{i}(\theta) \: | \: D_{KL}(f_{i}(\theta) || f_{lead}(\theta)) > \epsilon \}
\end{equation}
Thus,
\begin{equation}\label{16}
\sum_{i \in M_{0}}\frac{\alpha_{i}}{\beta_{1}}D_{KL}(f_{i}(\theta)||f_{lead}(\theta))
\leq \epsilon
\end{equation}
where,
\begin{equation}\label{15}
\beta_{1} = \sum_{i \in M_{0}}\alpha_{i}
\end{equation}
Using Lemma \ref{kl_bound1},
\begin{equation}\label{new_KL2}
D_{KL}\left(\sum_{i \in
M_{0}}\frac{\alpha_{i}}{\beta_{1}}f_{i}(\theta)||f_{lead}(\theta)\right) \leq
\epsilon
\end{equation}
The algorithm then clusters all the distributions in $M_{0}$ using CMVM,
\begin{equation}\label{17}
g_{1}(\theta) =
CMVM\left(\sum_{i \in M_{0}}\frac{\alpha_{i}}{\beta_{1}}f_{i}(\theta)\right)
\end{equation}
Then, using Theorem \ref{mix_tikh_thr},
\begin{multline}\label{new_KL2b}
D_{KL}\left(\sum_{i \in
M_{0}}\frac{\alpha_{i}}{\beta_{1}}f_{i}(\theta)||g_{1}(\theta)\right) \leq \\
D_{KL}\left(\sum_{i \in
M_{0}}\frac{\alpha_{i}}{\beta_{1}}f_{i}(\theta)||f_{lead}(\theta)\right)
\end{multline}
which means that,
\begin{equation}\label{new_KL3}
D_{KL}\left(\sum_{i \in
M_{0}}\frac{\alpha_{i}}{\beta_{1}}f_{i}(\theta)||g_{1}(\theta)\right) \leq
\epsilon
\end{equation}
We can rewrite the mixtures $f(\theta)$ and $g(\theta)$ in the following way,
\begin{equation}\label{new_f}
f(\theta) = \alpha_{M_{0}}f_{M_{0}}(\theta) + \alpha_{M_{1}}f_{M_{1}}(\theta)
\end{equation}
\begin{equation}\label{new_g}
g(\theta) = \beta_{1}g_{1}(\theta) + (1-\beta_{1})h(\theta)
\end{equation}
where,
\begin{equation}\label{thr_2}
\alpha_{M_{0}} = \sum_{i \in M_{0}}\alpha_{i}
\end{equation}
\begin{equation}\label{thr_2_1}
\alpha_{M_{1}} = \sum_{i \in M_{1}}\alpha_{i}
\end{equation}
\begin{equation}\label{thr_3}
f_{M_{i}}(\theta) = \sum_{j \in M_{i}}\frac{\alpha_{j}}{\alpha_{M_{i}}}f_{j}(\theta)
\end{equation}
Using (\ref{15}),
\begin{equation}\label{thr_4}
\alpha_{M_{0}} = \beta_{1}, \qquad \alpha_{M_{1}} = 1-\beta_{1}
\end{equation}
Therefore, (\ref{new_f}) and (\ref{new_g}) are two mixtures of the same size with exactly the same
coefficients, so the KL divergence between the probability mass functions induced by the coefficients of both mixtures
is zero. Using Lemma \ref{kl_bound3},
\begin{multline}\label{new_KL4}
D_{KL}(f(\theta) || g(\theta)) \leq \beta_{1}D_{KL}(f_{M_{0}}(\theta) || g_{1}(\theta)) \\ + (1 -\beta_{1})D_{KL}(f_{M_{1}}(\theta) || h(\theta))
\end{multline}
using (\ref{new_KL3}) we get,
\begin{multline}\label{new_KL5}
D_{KL}(f(\theta) || g(\theta)) \leq \beta_{1}\epsilon \\ + (1 - \beta_{1})D_{KL}(f_{M_{1}}(\theta) ||h(\theta))
\end{multline}
If we find a Tikhonov mixture $h(\theta)$ which satisfies,
\begin{equation}\label{new_KL7}
D_{KL}(f_{M_{1}}(\theta) || h(\theta)) \leq \epsilon
\end{equation}
then the theorem follows. But (\ref{new_KL7}) has exactly the same form as the original problem; thus,
applying the same clustering steps as described earlier to the new mixture $f_{M_{1}}(\theta)$
ultimately yields,
\begin{equation}\label{new_KL6}
D_{KL}(f(\theta) || g(\theta)) \leq \epsilon
\end{equation}
\end{proof}
\subsection{Mixture Reduction As Phase Noise Tracking}
\label{sec:phase_tracking}
Recall from Fig. \ref{fig:splits} that the phase noise messages can be
viewed as multiple separate phase trajectories. The mixture reduction
algorithm can then be viewed as a scheme that maps the different mixture components to
different phase trajectories: it receives
a mixture describing the next step of all the trajectories and assigns each component to a specific trajectory. Thus we are
able to accurately track all the hypotheses for all the phase trajectories.
Assuming slowly varying phase noise and high SNR, the tracking loop update for trajectory $i$, $\hat{\theta}^{i}_{k}$, can be computed in the following manner,
\begin{equation}\label{pll_equiv_op}
\hat{\theta}^{i}_{k} = \hat{\theta}^{i}_{k-1} +
\frac{|r_{k-1}||c_{t}|}{G_{k-1}\sigma^2}(\angle{r_{k-1}}-\angle{c_{t}}-\hat{\theta}^{i}_{k-1})
\end{equation}
where $c_{t}$ and $G_{k-1}$ are a soft decision of the constellation
symbol and the inverse conditional MSE of $\hat{\theta}_{k-1}$,
respectively. The proof of this claim is provided in Appendix \ref{sec:pll}.
Thus the mixture reduction is equivalent to multiple soft decision first order PLLs with adaptive loop
gains. Whenever the mixture components of the SPA message drift too far apart, a split occurs and the
number of tracking loops automatically increases in order to track the new trajectories.
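To illustrate the equivalence, the sketch below runs a single first-order loop of the form (\ref{pll_equiv_op}) on a synthetic Wiener phase with known (pilot-like) symbols. All numbers are hypothetical, and the normalization $G$ is held constant for simplicity (in the paper $G_{k-1}$ is the inverse conditional MSE and adapts over time):

```python
import numpy as np

rng = np.random.default_rng(0)
K, sigma2, sigma2_delta = 2000, 0.05, 1e-3

theta = np.cumsum(np.sqrt(sigma2_delta) * rng.standard_normal(K))  # Wiener phase
c = np.ones(K, dtype=complex)                                      # known symbols
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
r = c * np.exp(1j * theta) + noise

G = 10.0 / sigma2              # fixed loop normalization (simplifying assumption)
theta_hat, err = 0.0, np.empty(K)
for k in range(K):
    gain = abs(r[k]) * abs(c[k]) / (G * sigma2)
    # wrapped phase error between the rotated symbol decision and the estimate
    e = np.angle(r[k] * np.conj(c[k]) * np.exp(-1j * theta_hat))
    theta_hat += gain * e
    err[k] = np.angle(np.exp(1j * (theta_hat - theta[k])))

rms = np.sqrt(np.mean(err[200:] ** 2))   # steady-state tracking error
assert rms < 0.3
```

A split in the mixture corresponds to forking this loop state into two copies with different symbol decisions, each continuing independently.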
\subsection{Limited Order Adaptive Mixture}
In the previous section, we presented an algorithm which
\emph{adaptively} changes the canonical model's mixture order, with no
upper bound. This enables us to track \emph{all} the significant phase
trajectories in the SP messages. However, there may be complexity limited scenarios in which we are
forced
to use a limited number of mixture components, so we can track only a limited number of phase
trajectories. If the number of significant phase trajectories is larger than the
maximum number of mixture components allowed, then we might miss the correct
trajectory. For example, if we limit the number of tracked trajectories to one, we get an algorithm very
close to a PLL. In this case, whenever a split event occurs, we have to choose one of the trajectories and
abandon the other, and if we choose the wrong one, we experience a cycle slip. Analogously, we call a
cycle slip the event of missing the right trajectory even when more than one trajectory is available.
In this section, assuming pilots are present, we propose an improvement to Algorithm 1 which provides a
solution to the missed trajectories problem. The improved algorithm still uses a mixture canonical
model for the approximation of messages in the SPA, but with an additional
variable $\phi^{f}_{k}$ (for backward recursions, $\phi^{b}_{k}$), which approximates, online, the
probability that
the tracked trajectories include the correct one. This approach enables us to track phase
trajectories while maintaining a level of confidence in them. We apply the previously used clustering based
on the KL
divergence in order to select which components of the mixture
are going to be approximated by a Tikhonov mixture; the rest of
the components are discarded, but their total probability is accumulated. We then use pilot
symbols and $\phi^{f}_{k}$ in order to regain tracking if a cycle slip has occurred. This approach proves
to be robust to
phase slips and provides a high level of accuracy while keeping a low
computational load. Simulations show that the resulting algorithm
provides very good performance at high phase noise levels, very
close to the performance of the optimal algorithm, even for
mixtures of orders 1, 2 and 3.
\newline
\subsubsection{Modified Reduction Algorithm}
We denote the modification of Algorithm 1 for limited complexity as Algorithm 2. This algorithm selects
\textbf{some} components of a Tikhonov mixture $f(\theta)$ and clusters them into an output Tikhonov
mixture $g(\theta)$ of maximum order $L$.
\begin{algorithm}
\caption{Modified Mixture Reduction Algorithm}
\label{mix_redc_algo2}
{\fontsize{10}{5}
\begin{algorithmic}
\State $j \gets 1$
\While{$j \leq L$ and $|f(\theta)| > 0$}
\State $lead \gets \arg\max\{\underline{\alpha}\}$
\For{$i = 1 \to |f(\theta)|$}
\If {$D_{KL}(f_{i}(\theta) || f_{lead}(\theta)) \leq \epsilon $}
\State $idx \gets [idx , i]$
\EndIf
\EndFor
\State $\beta_{j} \gets \sum_{i\in idx}\alpha_{i}$
\State $g_{j}(\theta) \gets CMVM(\sum_{i\in
idx}\frac{\alpha_{i}}{\beta_{j}}f_{i}(\theta))$
\State $f(\theta) \gets f(\theta) - \sum_{i \in idx}{\alpha_{i}f_{i}(\theta)} $
\State Normalize $f(\theta)$
\State $j \gets j + 1$
\EndWhile
\State $\phi^{f}_{k} \gets (\sum_{j}\beta_{j})\phi^{f}_{k-1}$
\end{algorithmic}}
\end{algorithm}
We initialize $\phi^{f}_{0} = 1$, which means that at the first received sample, for the forward
recursion, there is no cycle slip. Note that Algorithm 2 is identical to Algorithm 1 apart from the
bound on the mixture order and the computation of $\phi^{f}_{k}$.
In each iteration, Algorithm 2 selects the most probable component in (\ref{pf_eq1}) and clusters all
the mixture components similar to it. The algorithm then removes this cluster and finds another cluster
in the same manner. When there are no components left in $f(\theta)$, or the maximum allowed mixture order is
reached, the algorithm computes $\phi^{f}_{k}$. As discussed earlier, this variable represents the
probability that a cycle slip has not occurred. The algorithm sums the probabilities of the clustered
components of $f(\theta)$ and multiplies the sum by $\phi^{f}_{k-1}$ to get $\phi^{f}_{k}$. Suppose we have
clustered all the components of $f(\theta)$; then $\phi^{f}_{k}$ equals $\phi^{f}_{k-1}$. This
suggests that the probability that a cycle slip occurred before sample $k$ is the same as before sample
$k-1$, in agreement with the fact that no trajectories were discarded in the reduction from $k-1$ to
$k$. For low enough $\epsilon$, $\phi^{f}_{k}$ is a good approximation of that probability.
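A toy trace makes the recursion $\phi^{f}_{k} = (\sum_{j}\beta_{j})\phi^{f}_{k-1}$ concrete (all numbers hypothetical; the pilot reset anticipates the recovery mechanism of the next subsection):

```python
# kept[k]: total clustered weight (sum of beta_j) at sample k;
# a pilot resets the confidence before the update, per Algorithm 3.
kept  = [1.0, 0.97, 0.90, 1.0, 0.95]
pilot = [False, False, False, True, False]

phi, trace = 1.0, []
for m, p in zip(kept, pilot):
    if p:
        phi = 1.0          # pilot restores full confidence
    phi *= m
    trace.append(round(phi, 6))
print(trace)               # -> [1.0, 0.97, 0.873, 1.0, 0.95]
```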
\newline
\subsubsection{Recovering From Cycle Slips}
In this section, we propose to use $\phi^{f}_{k-1}$, the probability that a cycle slip has not occurred, and
the information conveyed by pilots in order to combat cycle slips. In case of a cycle slip, the phase
message estimate based on the tracked trajectories is useless and we need a better estimate of
the phase message. We propose to estimate the message using \textbf{only} the pilot symbol, via $p_{d}(\theta_{k-1})$. However, if a cycle slip has not occurred, then estimating the phase message based
\textbf{only} on the pilot symbol might damage our tracking. Therefore, once a pilot symbol arrives, we
\textbf{average} the two proposed estimators according to $\phi^{f}_{k-1}$,
\begin{equation}\label{avg_cycle}
q_{f}(\theta_{k-1}) = \phi^{f}_{k-1}p_{f}(\theta_{k-1})+(1-\phi^{f}_{k-1})\frac{1}{2\pi}
\end{equation}
If a cycle slip has occurred and $\phi^{f}_{k-1}$ is low, then the pilot will, with high probability,
correct the tracking. We present the proposed approach in pseudo-code in Algorithm \ref{cycle_slip_algo}.
\begin{algorithm}
\caption{Forward Message Computation with Cycle Slip Recovery}
\label{cycle_slip_algo}
{\fontsize{10}{5}
\begin{algorithmic}
\State $p_{f}(\theta_{0}) \gets \frac{1}{2\pi}$
\State $\phi^{f}_{0} \gets 1$
\State $k \gets 1$
\While{$k \leq K$}
\State Compute $p_{d}(\theta_{k-1})$
\If {$c_{k-1}$ is a pilot}
\State $ q_{f}(\theta_{k-1}) \gets \phi^{f}_{k-1}p_{f}(\theta_{k-1})+(1-\phi^{f}_{k-1})\frac{1}{2\pi}$
\State $t \gets 1$
\Else
\State $ q_{f}(\theta_{k-1}) \gets p_{f}(\theta_{k-1}) $
\State $t \gets \phi^{f}_{k-1}$
\EndIf
\State $\tilde{p}_{f}(\theta_{k}) \gets \int_{0}^{2\pi}q_{f}(\theta_{k-1})p_{d}(\theta_{k-1})p_{\Delta}(\theta_{k}-\theta_{k-1})d\theta_{k-1}$
\State $[p_{f}(\theta_{k}),\phi^{f}_{k}] \gets Algorithm2(\tilde{p}_{f}(\theta_{k}),t)$
\State $k \gets k + 1$
\EndWhile
\end{algorithmic}}
\end{algorithm}
\section{Computation of $P_{u}(c_{k})$}
As discussed in Section \ref{sys_model}, after computing the forward and backward messages, the next
step of the SP algorithm is to compute $P_{u}(c_{k})$. These messages describe the LLR of a code symbol
based on the channel part of the factor graph. They are sent to the LDPC decoder, and their correct
approximation is crucial for the LDPC decoding.
When using Algorithm 1 for the computation of the forward and backward messages, we insert the reduced
mixtures into (\ref{Pu}) and compute the message analytically. However, when using a limited order mixture
and Algorithm 2 with the cycle slip recovery method of Algorithm 3, we use $\phi^{f}_{k}$ and $\phi^{b}_{k}$ in order to improve the estimation of the messages. Thus $P_{u}(c_{k})$ is a weighted sum of
four components, which can be interpreted as conditioning on the probability that a phase
slip has occurred in each recursion (forward and backward). This
ensures that the computation of $P_{u}(c_{k})$ is based on the most
reliable phase posterior estimates, even if a phase slip has
occurred in a single recursion (forward or backward).
We insert the mixture (\ref{avg_cycle}) into (\ref{Pu}),
\begin{equation}\label{Pu_new1}
P_{u}(c_{k}) \propto \int_{0}^{2\pi}q_{f}(\theta_{k})q_{b}(\theta_{k})e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
where $q_{f}(\theta_{k})$ and $q_{b}(\theta_{k})$ are defined in Algorithm 3.
We decompose the computation to a summation of four components,
\begin{equation}\label{Pu_new}
P_{u}(c_{k}) \propto A + B + C + D
\end{equation}
where
\begin{equation}\label{A}
A = \phi^{f}_{k}\phi^{b}_{k}\int_{0}^{2\pi}p_{f}(\theta_{k})p_{b}(\theta_{k})e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
\begin{equation}\label{B}
B = \phi^{f}_{k}(1-\phi^{b}_{k})\int_{0}^{2\pi}p_{f}(\theta_{k})\frac{1}{2\pi}e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
\begin{equation}\label{C}
C = (1-\phi^{f}_{k})\phi^{b}_{k}\int_{0}^{2\pi}\frac{1}{2\pi}p_{b}(\theta_{k})e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
\begin{equation}\label{D}
D = (1-\phi^{f}_{k})(1-\phi^{b}_{k})\int_{0}^{2\pi}\frac{1}{2\pi}\frac{1}{2\pi}e_{k}(c_{k},\theta_{k})d\theta_{k}
\end{equation}
We will detail the computation of $A$, but the same applies to the
other components of (\ref{Pu_new}). We use the mixture form defined in (\ref{new_pf}) and (\ref{new_pb}).
We define the following,
\begin{equation}\label{Z_mix}
Z_{\psi} = z^{k,f}_{i}+z^{k,b}_{j}+\frac{r_{k}c^{*}_{k}}{\sigma^{2}}
\end{equation}
and get,
\begin{equation}\label{19}
A = \sum_{i=1}^{N_{f}^{k}}\sum_{j=1}^{N_{b}^{k}}\alpha^{k,f}_{i}\alpha^{k,b}_{j}\frac{I_{0}(|Z_{\psi}|)}{2\pi
I_{0}(|z^{k,f}_{i}|)I_{0}(|z^{k,b}_{j}|)}
\end{equation}
When implementing the algorithm in the log domain, we can simplify (\ref{19}) by using (\ref{assumps3}),
\begin{multline}\label{20}
\log\left(\frac{I_{0}(|Z_{\psi}|)}{2\pi
I_{0}(|z^{k,f}_{i}|)I_{0}(|z^{k,b}_{j}|)}\right) \approx |Z_{\psi}| - |z^{k,f}_{i}| - |z^{k,b}_{j}| \\ -\frac{1}{2}\log\left(\frac{|Z_{\psi}|}{|z^{k,f}_{i}||z^{k,b}_{j}|}\right)
\end{multline}
and for large enough $|z^{k,f}_{i}|$ and $|z^{k,b}_{j}|$
\begin{equation}\label{21}
\log\left(\frac{I_{0}(|Z_{\psi}|)}{2\pi
I_{0}(|z^{k,f}_{i}|)I_{0}(|z^{k,b}_{j}|)}\right) \approx |Z_{\psi}| - |z^{k,f}_{i}| - |z^{k,b}_{j}|
\end{equation}
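The accuracy of the log-domain approximations (\ref{20}) and (\ref{21}) can be checked numerically. Note that the underlying expansion also carries an additive constant $-\frac{1}{2}\log(2\pi)$, which drops out once $P_{u}(c_{k})$ is normalized; the sketch below (illustrative values, series-based $I_{0}$) verifies the constant-corrected expansion and the leading-order form (\ref{21}).

```python
import math

def bessel_i0(x):
    # Modified Bessel I0 via its power series: sum_m (x/2)^(2m) / (m!)^2
    s, t = 1.0, 1.0
    for m in range(1, 120):
        t *= (x / (2.0 * m)) ** 2
        s += t
        if t < 1e-17 * s:
            break
    return s

def log_ratio_exact(z_psi, z_f, z_b):
    # log( I0(|Z_psi|) / (2 pi I0(|z_f|) I0(|z_b|)) ), all magnitudes
    return (math.log(bessel_i0(z_psi)) - math.log(2.0 * math.pi)
            - math.log(bessel_i0(z_f)) - math.log(bessel_i0(z_b)))

def log_ratio_approx21(z_psi, z_f, z_b):
    # Leading-order approximation as in (21)
    return z_psi - z_f - z_b
```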
\section{Complexity}
In this section we detail the computational complexity of the proposed algorithms and compare it to that of
the DP and BARB algorithms. Since the mixture order changes between symbols and LDPC
iterations, we cannot give an exact expression for the computational complexity. Therefore, in order to
assess the complexity of the algorithms, we denote by $\gamma(i)$ the average number of components in the
canonical model per sample, where $i$ is the index of the LDPC iteration. $\gamma(i)$ decreases over
consecutive LDPC iterations because the LDPC decoder provides progressively better soft information on the
symbols, resolving ambiguities and decreasing the required number of components in the mixture. The
value of $\gamma(i)$ depends mainly on the number of ambiguities encountered by the phase estimation
algorithm. These ambiguities are a function of the SNR, the phase noise variance and algorithmic design
parameters such as the number of LDPC iterations, the KL threshold $\epsilon$ and the pilot pattern.
The significant difference in computational complexity between the DP and the mixture based algorithms
stems from the fact that multimodal SPA messages are not well characterized by a single Tikhonov
distribution, so the DP algorithm must use many quantization levels to accurately describe them. The
mixture algorithm, however, successfully characterizes these messages using few mixture parameters, and
this difference becomes very significant as the modulation order increases.
The mixture algorithm starts out by approximating the forward and backward messages using Tikhonov
mixtures. These mixtures are then inserted into (\ref{pf}) and (\ref{pb}) to produce larger mixtures
(\ref{pf_eq1}) and (\ref{pb_eq1}). Next, the mixture reduction scheme produces a reduced mixture which is
used to compute $P_{u}(c_{k})$. On average, for a given LDPC iteration $i$, the forward message, $p_{f}(\theta_{k})$, is a Tikhonov mixture of order $\gamma(i)$. After applying (\ref{pf}), the mixture
increases to order $M\gamma(i)$ and is sent to the mixture reduction algorithm. Also on average, the
clustering algorithm performs $\gamma(i)$ clustering operations on $M$ components. The clustered mixtures
are then used to compute $P_{u}(c_{k})$ which is a multiplication of the forward and backward mixtures.
In Appendices \ref{sec:cmvm_exmp} and \ref{sec:kl_comp}, we have described the computation of the KL
divergence, $D_{KL}(f_{i}(\theta) || f_{lead}(\theta))$, and the application of the CMVM operator on the
clustered components - $g_{j}(\theta) \gets CMVM(\sum_{i\in idx}\frac{\alpha_{i}}{\beta_{j}}f_{i}(\theta))$.
In order to further reduce the complexity of the proposed algorithm, the variables representing
probabilities are stored in log domain and summation of these variables is approximated using the $\max$
operation. We also use the fact that for large $x$, $\log(I_{0}(x)) \approx x$ and approximate the KL
divergence in (\ref{kl_comp_full}) as,
\begin{equation}\label{kl_comp_approx}
D_{KL} \approx |z_{2}|(1-\cos(\angle z_{1} - \angle z_{2}))
\end{equation}
There is an option to abandon the clustering altogether and replace it with a component selection algorithm,
which maintains the specified accuracy but requires more components in return. The complexity of
clustering is thus traded against the complexity of other tasks. The selection algorithm is a simple
modification of the clustering algorithm: instead of using CMVM to cluster several close components, we
simply choose $f_{lead}(\theta)$ as the result of the clustering.
Recalling (\ref{new_KL2}), we note that $f_{lead}(\theta)$ satisfies the accuracy condition and Theorem
\ref{mix_red_acc} still holds. Thus we will not suffer degradation in maximum error if we use this approximation
and not CMVM. However, the mean number of mixture components will increase since we do not perform any
clustering. The CMVM operator actually reduces the KL divergence between the original mixture and the
reduced mixture to much less than $\epsilon$. Therefore, when using CMVM, the reduced mixture is much
smaller than needed to satisfy the accuracy condition. In order to get the same performance with the
reduced algorithm, we need to decrease $\epsilon$ and use more components.
The reduced complexity is summarized in Table \ref{tab:model_complexity_reduced}, and compared to DP and
BARB. $Q$ is the number of quantization levels per constellation symbol in the DP algorithm. We only count
multiplication and LUT operations since they are more costly than additions. We assume that the cosine
operation is implemented using a lookup table.
\begin{table*}[!ht]
\caption{Computational load per code symbol per iteration for M-PSK
constellation}
\centering
\begin{tabular}{p{1cm}|p{4cm}|p{2cm}|p{3cm}}
&DP & BARB & Limited Order
\\
\hline\hline
MULS & $4Q^{2}M^{2}+2M^{2}Q + 6MQ + M$ & $7M + 5$ & $4M\gamma(i)^{2}+2M(\gamma(i)+1)$\\
& & &\\
\hline
LUT & $QM$ & $3M$ &$3M\gamma(i)^{2}-\gamma(i)(2M-1)$\\
& & &\\
\hline
\end{tabular}
\label{tab:model_complexity_reduced}
\end{table*}
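The entries of Table \ref{tab:model_complexity_reduced} are simple polynomial counts, which can be sketched as follows. With the values used later in the simulations ($M=8$ for 8PSK and $Q=16$ quantization levels), the DP count reproduces the $68360$ MULS figure reported below; $\gamma$ may be fractional when it is a measured mean mixture order.

```python
def muls_dp(M, Q):
    # Multiplications per code symbol for DP (first column of the table)
    return 4 * Q**2 * M**2 + 2 * M**2 * Q + 6 * M * Q + M

def muls_barb(M):
    # Multiplications per code symbol for BARB
    return 7 * M + 5

def muls_limited(M, gamma):
    # Limited-order mixture algorithm; gamma is the (mean) mixture order
    return 4 * M * gamma**2 + 2 * M * (gamma + 1)
```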
\section{Numerical Results}
In this section, we analyze the performance of the algorithms proposed in this paper. The performance
of a decoding scheme is characterized by two metrics - the Packet/Bit Error Rate (PER/BER) and the
computational complexity. We use the DP algorithm as a benchmark for the lowest achievable PER and the
algorithm proposed in
\cite{barb2005}, denoted above as BARB, as a benchmark for a state-of-the-art low complexity scheme. The
phase noise model used in all the simulations is a Wiener process, and the DP algorithm was simulated using
16 quantization levels between two constellation points. Also, note that the simulation results presented
in this paper use an MPSK constellation, but the algorithm can also be applied, with small changes, to QAM
or any other constellation.
In Fig. \ref{fig:ber8psk} and \ref{fig:per8psk}, we show the BER and PER results for an 8PSK constellation
with an LDPC code of length 4608 with code rate 0.89. We chose $\sigma_{\Delta}=0.05$[rads/symbol] and a
single pilot was inserted every 20 symbols.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{ber_8psk_20p_0_0_5.eps}\\
\caption{Bit error rate - 8PSK, $\sigma_{\Delta}=0.05$, Pilot Frequency $= 0.05$}\label{fig:ber8psk}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{per_8psk_20p_0_0_5.eps}\\
\caption{Packet error rate - 8PSK, $\sigma_{\Delta}=0.05$, Pilot Frequency $= 0.05$}\label{fig:per8psk}
\end{figure}
The algorithms simulated were the unlimited order algorithm, the limited order algorithm with varying
mixture orders (1,2 and 3) and the reduced complexity algorithm of Order 3 (denoted Reduced Complexity Size 3). We
can see that the unlimited mixture, the limited order mixtures of order 2 and 3 and the reduced complexity
algorithm provide almost identical results, which are close to the
performance of the DP algorithm. On the other hand, the BARB algorithm has significant degradation with
respect to all the algorithms. We note that a mixture with only one component cannot describe the phase
trajectory well enough to reach the PER levels of DP, but this algorithm is still better than BARB.\\
In Figs. \ref{fig:perbpsk},\ref{fig:perqpsk} and \ref{fig:per32psk} we show the PER results for a
BPSK,QPSK and 32PSK constellations respectively with the same code used earlier. For the BPSK and QPSK
scenarios we simulated the phase noise using $\sigma_{\Delta}=0.1$[rads/symbol] and for 32PSK we used $\sigma_{\Delta}=0.01$[rads/symbol]. A single pilot was inserted according to the pilot frequency detailed
in each figure's caption.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{per_bpsk_80p_0_1.eps}\\
\caption{Packet error rate - BPSK, $\sigma_{\Delta}=0.1$, Pilot Frequency $= 0.0125$}\label{fig:perbpsk}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{per_qpsk_20p_0_1.eps}\\
\caption{Packet error rate - QPSK, $\sigma_{\Delta}=0.1$, Pilot Frequency $= 0.05$}\label{fig:perqpsk}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{per_32psk_001_40pilots.eps}\\
\caption{Packet error rate - 32PSK, $\sigma_{\Delta}=0.01$, Pilot Frequency $= 0.025$}\label{fig:per32psk}
\end{figure}
We can see that the mixture of order 2 is close to the
performance of the optimal algorithm, even when very few pilots are
present and the code rate and constellation order are high. One should also observe that for the 32PSK
scenario, the BARB algorithm demonstrates a high error floor. This is because of the large phase noise
variance and large spacing
between pilots, which cause the SPA messages to become uniform
and thus provide no information to the LDPC decoder. The high
code rate amplifies this problem. However, the limited algorithm with only one Tikhonov component performs
almost as well as the DP algorithm. This is due to the cycle slip recovery procedure we have presented
earlier which enables the limited algorithm to regain tracking even after missing the correct trajectory.\\
In Fig. \ref{fig:avg_comps8_3lobes_eps4} we present the average number
of mixture components, for different SNR and
LDPC iterations for $\epsilon = 4$. It can be seen that for the first iteration, many components are
needed since there is a high level of phase
ambiguity. As the iterations progress the LDPC decoder sends better soft information for
the code symbols, resolving these ambiguities. Therefore, the average
number of mixture components becomes closer to $1$.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{8psk_005_mean_hyp_eps_4.eps}\\
\caption{8PSK Mean Number of Tikhonov Mixture Components - Full Algorithm, Maximum 3 lobes}\label{fig:avg_comps8_3lobes_eps4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{8psk_005_mean_hyp_eps_1.eps}\\
\caption{8PSK Mean Number of Tikhonov Mixture Components - Reduced Complexity Algorithm, Maximum 3 lobes}\label{fig:avg_comps8_3lobes_eps1}
\end{figure}
In Fig. \ref{fig:avg_comps8_3lobes_eps1} we present the average number
of mixture components for the reduced complexity algorithm, for different SNR and
LDPC iterations for $\epsilon = 1$. We chose $\epsilon$ to be lower since we do not use the CMVM operator
as described earlier. As shown in this figure, the mean number of components is larger than for $\epsilon= 4$ but the overall complexity is still manageable.
In Table \ref{tab:complexity_8psk_reduced}, the computational complexity of the reduced complexity
algorithm is compared to the DP and BARB algorithms. We use the mean mixture order in Fig. \ref{fig:avg_comps8_3lobes_eps1} as $\gamma$. We can see that the algorithms proposed in this contribution
have a far lower computational complexity than DP, while achieving comparable PER levels.
\begin{table*}[!ht]
\caption{Simulation Results - Computational load per code symbol for 8PSK constellation at $\frac{E_{b}}{N_{0}}=8dB$}
\centering
\begin{tabular}{p{2cm}|p{3cm}|p{3cm}|p{3cm}}
Algorithm &DP & BARB & Reduced Complexity, Order 3\\
\hline
Iteration & Constant for all iterations &Constant&1 2 3 4
\\
\hline\hline
MULS & $68360$& $61$& $312$ $292$ $273$ $238$\\
LUT & $128$& $24$& $147$ $134$ $123$ $102$\\
\hline
\end{tabular}
\label{tab:complexity_8psk_reduced}
\end{table*}
It should be noted, that the PER performance of the Unlimited algorithm, for small enough $\epsilon$, is
as good as the PER performance of the DP algorithm because the mixture algorithm tracks all the
significant trajectories with no limit on the mixture order. The choice of the threshold $\epsilon$ in the
algorithm is according to the level of distortion allowed for the reduced mixture with respect to the
original mixture. If $\epsilon$ is very close to zero, then there will not be any components close enough
and the mixture will not be reduced. Therefore, there is a tradeoff between complexity and accuracy in the
selection of this parameter. This tradeoff is illustrated in Fig. \ref{fig:avg_comps8_eps_1_4}, where we
have plotted the mean mixture order for the unlimited algorithm using $\epsilon=1$ and $\epsilon=4$. It
should be noted that for these values and chosen SNRs, the unlimited algorithm has the same PER levels for
both $\epsilon$. However, choosing $\epsilon=15$ with the same algorithm will increase the PER. Therefore,
choosing the threshold too low might increase the mixture order with no actual need.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{8psk_005_mean_hyp_eps_1_and_4.eps}\\
\caption{8PSK Mean Number of Tikhonov Mixture Components - Unlimited Algorithm}\label{fig:avg_comps8_eps_1_4}
\end{figure}
\section{Discussion}
In this paper we have presented a new approach for joint decoding and estimation of LDPC coded
communications in phase noise channels. The proposed algorithms are based on the approximation of SPA
messages using Tikhonov mixture
canonical models. We have presented an innovative approach for mixture dimension reduction which keeps
accuracy levels high and is low complexity. The decoding scheme proposed in this contribution is shown via
simulations to significantly reduce the computational complexity of the best known decoding algorithms,
while keeping PER levels very close to the optimal algorithm (DP).
Moreover, we have presented a new insight to the underlying dynamics of phase noise estimation using
Bayesian methods. We have shown that the estimation algorithm can be viewed as trajectory tracking, thus
enabling the development of the mixture reduction and clustering algorithms which can be viewed as PLLs.
\appendices
\section{Proof of the CMVM Theorem}
\label{sec:cmvm_thr}
Let $f(\theta)$ be any circular distribution defined on $[0,2\pi)$ and
$g(\theta)$ a Tikhonov distribution.
\begin{equation}\label{22}
g(\theta) = \frac{e^{Re[\kappa e^{-j(\theta-\mu)}]}}{2\pi I_{0}(\kappa)}
\end{equation}
We wish to find,
\begin{equation}\label{KL1}
[\mu^{*},\kappa^{*}] = \arg\min_{\mu,\kappa} D_{KL}(f||g)
\end{equation}
According to the definition of the KL divergence,
\begin{equation}\label{23}
D_{KL}(f||g) = -h(f)-\int^{2\pi}_{0}f(\theta)\log g(\theta)d\theta
\end{equation}
where $h(f)$, the differential entropy of the circular distribution
$f(\theta)$, does not affect the optimization,
\begin{equation}\label{KL2}
[\mu^{*},\kappa^{*}] = \arg\max_{\mu,\kappa}\int^{2\pi}_{0}f(\theta)\log g(\theta)d\theta
\end{equation}
After the insertion of the Tikhonov form into (\ref{KL2}), we get
\begin{equation}\label{KL3}
[\mu^{*},\kappa^{*}] = \arg\max_{\mu,\kappa}\int^{2\pi}_{0}f(\theta)Re[\kappa e^{-j(\theta-\mu)}]d\theta - \log(2\pi I_{0}(\kappa))
\end{equation}
Rewriting (\ref{KL3}) as an expectation and maximizing over $\mu$ only,
\begin{equation}\label{24}
\mu^{*} = \arg\max_{\mu} \kappa \mathbb{E}(Re[ e^{-j(\theta-\mu)}])
\end{equation}
Using the linearity of the expectation and real operators,
\begin{equation}\label{KL4}
\mu^{*} = \arg\max_{\mu} \kappa Re[\mathbb{E}(e^{j(\theta-\mu)})]
\end{equation}
We can view (\ref{KL4}) as an inner product and therefore,
the maximizing value of $\mu$ is obtained, according to the
Cauchy-Schwarz inequality, for
\begin{equation}\label{25}
\mu^{*} = \angle{\mathbb{E}(e^{j(\theta)})}
\end{equation}
Now we move on to finding the optimal $\kappa$, using the fact that we
found the optimal $\mu$.
For $\mu^{*}$, the optimal $g(\theta)$ needs to satisfy
\begin{equation}\label{diff0}
\frac{\partial D(f||g)}{\partial \kappa} = 0
\end{equation}
After applying the partial derivative to (\ref{KL3}), and using
\begin{equation}\label{26}
\frac{d\log I_{0}(\kappa)}{d\kappa} = \frac{I_{1}(\kappa)}{I_{0}(\kappa)}
\end{equation}
we get,
\begin{equation}\label{27}
\mathbb{E}(Re[e^{-j(\theta-\mu^{*})}]) =
\frac{I_{1}(\kappa^{*})}{I_{0}(\kappa^{*})}
\end{equation}
Recalling (\ref{circ_mu}) and (\ref{circ_var}), we get that the
optimal Tikhonov distribution $g(\theta)$ is given by matching its
circular mean and variance to the circular mean and circular variance
of the distribution $f(\theta)$.
\qed
\section{Using The \textsf{CMVM} Operator to Cluster Tikhonov Mixture Components}
\label{sec:cmvm_exmp}
In algorithms 1 \& 2, at each clustering iteration, a set $J$ of mixture components indices of the input
Tikhonov mixture (\ref{in}) is selected. The corresponding mixture components are clustered using the CMVM
operator. In this appendix we explicitly compute the application of the CMVM operator and introduce
several approximations to reduce the computational cost.
For simplicity, assume that the mixture components in the set $J$ are,
\begin{equation}\label{mix_tikh_11}
f^{J}(\theta_{k})=\sum_{l \in J}^{|J|}\alpha_{l}\frac{e^{Re[Z_{l}e^{-j\theta_{k}}]}}{2\pi I_{0}(|Z_{l}|)}
\end{equation}
Using Theorem \ref{mix_tikh_thr} and skipping the algebraic details,
the \textsf{CMVM} operator for (\ref{mix_tikh_11}) is:
\begin{equation}\label{cmvm_res}
\textsf{CMVM}(f^{J}(\theta_{k})) =\frac{e^{Re[Z^{f}_{k}e^{-j\theta_{k}}]}}{2\pi I_{0}(|Z^{f}_{k}|)}
\end{equation}
where
\begin{equation}\label{Zf}
Z_{k}^{f} = \hat{k}e^{j\hat{\mu}}
\end{equation}
and
\begin{equation}\label{tikh_modify_eqs_mu}
\hat{\mu} = \arg{{\sum_{l \in J}^{|J|}\alpha_{l}\frac{I_{1}(|Z_{l}|)}{I_{0}(|Z_{l}|)}e^{j\arg(Z_{l})}}}
\end{equation}
\begin{equation}\label{tikh_modify_eqs_k}
\frac{1}{2 \hat{k}} = 1 - \sum_{l \in J}^{|J|}\alpha_{l}\frac{I_{1}(|Z_{l}|)}{I_{0}(|Z_{l}|)}Re[e^{j(\hat{\mu}-\arg(Z_{l}))}]
\end{equation}
Since evaluating the modified Bessel function is computationally prohibitive, we use the following
approximation,
\begin{equation}\label{assumps3}
\log(I_{0}(k)) \approx k-\frac{1}{2}\log(k)-\frac{1}{2}\log(2\pi)
\end{equation}
which holds for $k>2$, i.e. reasonably narrow distributions.
Using the following relation,
\begin{equation}\label{assumps1}
I_{1}(x) = \frac{dI_{0}(x)}{dx}
\end{equation}
We find that,
\begin{equation}\label{fracI1I0}
\frac{I_{1}(k)}{I_{0}(k)} = \frac{d}{dk}(\log(I_{0}(k)))
\end{equation}
Therefore
\begin{equation}\label{eq5}
\frac{I_{1}(k)}{I_{0}(k)} \approx 1-\frac{1}{2k}
\end{equation}
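A quick numerical check of (\ref{assumps3}) and (\ref{eq5}), with $I_{0}$ and $I_{1}$ evaluated from their power series (an illustrative sketch only):

```python
import math

def bessel_i0(x):
    # I0 via power series: sum_m (x/2)^(2m) / (m!)^2
    s, t = 1.0, 1.0
    for m in range(1, 80):
        t *= (x / (2.0 * m)) ** 2
        s += t
        if t < 1e-17 * s:
            break
    return s

def bessel_i1(x):
    # I1 via power series: sum_m (x/2)^(2m+1) / (m! (m+1)!)
    t = x / 2.0
    s = t
    for m in range(1, 80):
        t *= (x / 2.0) ** 2 / (m * (m + 1))
        s += t
        if t < 1e-17 * s:
            break
    return s

def log_i0_approx(k):
    # Approximation (assumps3), valid for k > 2
    return k - 0.5 * math.log(k) - 0.5 * math.log(2.0 * math.pi)

def ratio_approx(k):
    # Approximation (eq5): I1(k)/I0(k) ~ 1 - 1/(2k)
    return 1.0 - 1.0 / (2.0 * k)
```

For $k=10$ the errors are already on the order of $1/(8k)$, consistent with the asymptotic expansion of $I_{0}$.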
Thus, the approximated versions of (\ref{tikh_modify_eqs_k}) and (\ref{tikh_modify_eqs_mu}) are
\begin{equation}\label{tikh_modify_eqs_mu_simp}
\hat{\mu} = \arg[{{\sum_{l \in J}^{|J|}\alpha_{l}(1-\frac{1}{2|Z_{l}|})e^{j\arg(Z_{l})}}}]
\end{equation}
\begin{equation}\label{tikh_modify_eqs_k_simp}
\frac{1}{2 \hat{k}} = 1 - \sum_{l \in J}^{|J|}\alpha_{l}(1-\frac{1}{2|Z_{l}|})\cos(\hat{\mu}-\arg(Z_{l}))
\end{equation}
We also use the approximation for the modified bessel function in the
computation of $\alpha_{l}$.
For a small enough $\epsilon$, $\cos(\hat{\mu}-\arg(Z_{l})) \approx 1$, thus one can further reduce the
complexity of (\ref{tikh_modify_eqs_k_simp})
\begin{equation}\label{tikh_modify_eqs_k_simp1}
\frac{1}{\hat{k}} = \sum_{l \in J}^{|J|}\alpha_{l}\frac{1}{|Z_{l}|}
\end{equation}
which coincides with the computation of a variance of a Gaussian mixture.
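The approximate updates (\ref{tikh_modify_eqs_mu_simp}) and (\ref{tikh_modify_eqs_k_simp}) can be sketched as follows; components are represented by pairs $(\alpha_{l}, Z_{l})$ with weights summing to one, and the function returns the merged parameter $\hat{k}e^{j\hat{\mu}}$. Merging identical components returns them unchanged, while merging components with different phases lowers the concentration, as expected.

```python
import cmath
import math

def cmvm_approx(components):
    # components: list of (alpha_l, Z_l) pairs with sum(alpha_l) = 1.
    # Returns the merged Tikhonov parameter k_hat * exp(j * mu_hat), using the
    # Bessel-free approximations of the CMVM equations.
    acc = 0.0 + 0.0j
    for a, z in components:
        acc += a * (1.0 - 1.0 / (2.0 * abs(z))) * cmath.exp(1j * cmath.phase(z))
    mu_hat = cmath.phase(acc)
    inv_2k = 1.0
    for a, z in components:
        inv_2k -= a * (1.0 - 1.0 / (2.0 * abs(z))) * math.cos(mu_hat - cmath.phase(z))
    k_hat = 1.0 / (2.0 * inv_2k)
    return k_hat * cmath.exp(1j * mu_hat)
```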
\section{Computation of the KL divergence between two Tikhonov Distributions}
\label{sec:kl_comp}
In this section we provide the computation of the KL divergence between two Tikhonov distributions,
which is a major part of both mixture reduction algorithms. We also provide approximations used to
reduce the computational cost of this computation.
Consider two Tikhonov distributions $g_{1}(\theta)$ and $g_{2}(\theta)$, where
\begin{equation}\label{g1}
g_{1}(\theta) = \frac{e^{Re[z_{1}e^{-j\theta}]}}{2\pi I_{0}(|z_{1}|)}
\end{equation}
\begin{equation}\label{g2}
g_{2}(\theta) = \frac{e^{Re[z_{2}e^{-j\theta}]}}{2\pi I_{0}(|z_{2}|)}
\end{equation}
We wish to compute the following KL divergence,
\begin{equation}\label{28}
D_{KL}(g_{1}(\theta) || g_{2}(\theta))
\end{equation}
which is,
\begin{equation}\label{29}
D_{KL} = \int^{2\pi}_{0}g_{1}(\theta) \log(\frac{e^{Re[z_{1}e^{-j\theta}]}I_{0}(|z_{2}|)}{e^{Re[z_{2}e^{-j\theta}]}I_{0}(|z_{1}|)})d\theta
\end{equation}
Thus,
\begin{equation}\label{30}
D_{KL} = \log\left(\frac{I_{0}(|z_{2}|)}{I_{0}(|z_{1}|)}\right) + \int^{2\pi}_{0}g_{1}(\theta) Re[(z_{1} - z_{2})e^{-j\theta}]d\theta
\end{equation}
After some algebraic manipulations, we get
\begin{multline}\label{31}
D_{KL} = \log\left(\frac{I_{0}(|z_{2}|)}{I_{0}(|z_{1}|)}\right) + \\ \frac{I_{1}(|z_{1}|)}{I_{0}(|z_{1}|)}(|z_{1}|-|z_{2}|\cos(\angle z_{1} - \angle z_{2}))
\end{multline}
Using (\ref{eq5}) and (\ref{assumps3}) we get
\begin{multline}\label{kl_comp_full}
D_{KL} \approx |z_{2}|(1-\cos(\angle z_{1} - \angle z_{2}))- \\ \frac{1}{2}\log\left(\frac{|z_{2}|}{|z_{1}|}\right)+\frac{|z_{2}|}{2|z_{1}|}\cos(\angle z_{1} - \angle z_{2})
\end{multline}
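The closed form (\ref{31}) can be validated against direct numerical integration of the divergence. The sketch below uses series-based $I_{0}$, $I_{1}$ and a rectangle rule on $[0,2\pi)$, with illustrative parameter values.

```python
import cmath
import math

def bessel_i0(x):
    # I0 via power series: sum_m (x/2)^(2m) / (m!)^2
    s, t = 1.0, 1.0
    for m in range(1, 80):
        t *= (x / (2.0 * m)) ** 2
        s += t
        if t < 1e-17 * s:
            break
    return s

def bessel_i1(x):
    # I1 via power series: sum_m (x/2)^(2m+1) / (m! (m+1)!)
    t = x / 2.0
    s = t
    for m in range(1, 80):
        t *= (x / 2.0) ** 2 / (m * (m + 1))
        s += t
        if t < 1e-17 * s:
            break
    return s

def kl_tikhonov_exact(z1, z2):
    # Closed form (31)
    ratio = bessel_i1(abs(z1)) / bessel_i0(abs(z1))
    d_ang = cmath.phase(z1) - cmath.phase(z2)
    return (math.log(bessel_i0(abs(z2)) / bessel_i0(abs(z1)))
            + ratio * (abs(z1) - abs(z2) * math.cos(d_ang)))

def kl_tikhonov_numeric(z1, z2, n=20000):
    # Direct rectangle-rule evaluation of int g1 * log(g1/g2) dtheta
    log_const = math.log(bessel_i0(abs(z2)) / bessel_i0(abs(z1)))
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        e = cmath.exp(-1j * k * h)
        g1 = math.exp((z1 * e).real) / (2.0 * math.pi * bessel_i0(abs(z1)))
        total += g1 * ((z1 * e).real - (z2 * e).real + log_const)
    return total * h
```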
\section{Proof of Mixture Reduction as Multiple PLLs}
\label{sec:pll}
In this section we prove the claim presented in Section \ref{sec:phase_tracking}, that under certain
channel conditions the mixture reduction algorithms can be viewed as multiple PLLs tracking the different
phase trajectories. For simplicity, we only show the case where the mixture reduction
algorithm converges to a single PLL (the generalization to more than one PLL is trivial, as long as there
are no splits).
As described earlier, we model the forward messages as Tikhonov mixtures. Suppose the $m^{th}$ component
is,
\begin{equation}\label{pf_k}
p^{m}_{f}(\theta_{k-1}) = \frac{e^{Re[z^{k-1,f}_{m} e^{-j\theta_{k-1}}]}}{2\pi
I_{0}(|z^{k-1,f}_{m}|)}
\end{equation}
then using (\ref{pf}), we get a Tikhonov mixture $f(\theta_{k})$,
\begin{equation}\label{pf_k_mul_pd}
f(\theta_{k}) = \sum_{i=1}^{M}\alpha_{i}f_{i}(\theta_{k})
\end{equation}
where,
\begin{equation}\label{z_k_1}
f_{i}(\theta_{k}) = \frac{e^{Re[\tilde{z}^{k-1,f}_{m,i}e^{-j\theta_{k}}]}}{2\pi
I_{0}(|\widetilde{z}^{k-1,f}_{m,i}|)}
\end{equation}
\begin{equation}\label{tmp_z}
\tilde{z}^{k-1,f}_{m,i} = \frac{(z^{k-1,f}_{m} +
\frac{r_{k-1}x_{i}^{*}}{\sigma^2})}{1+\sigma^{2}_{\Delta}|(z^{k-1,f}_{m} +
\frac{r_{k-1}x_{i}^{*}}{\sigma^2})|}
\end{equation}
and $x_{i}$ is the $i^{th}$ constellation symbol.
We insert (\ref{pf_k_mul_pd}) into the mixture
reduction algorithms. We assume slowly varying phase noise and high SNR, so that the mixture
reduction clusters all the mixture components with non negligible
probability into one Tikhonov distribution. Then, the \emph{circular} mean, $\hat{\theta}_{k}$, of the
clustered
Tikhonov distribution is computed according to,
\begin{equation}\label{mu_k_1}
\hat{\theta}_{k} = \angle \mathbb{E} (e^{j\theta_{k}})
\end{equation}
where the expectation is over the distribution $f(\theta_{k})$.
We note that for every complex valued scalar $z$, the following holds
\begin{equation}\label{im_log}
\angle{z} = \Im(\log{z})
\end{equation}
where $\Im$ denotes the imaginary part of a complex scalar.
If we apply (\ref{im_log}) to (\ref{mu_k_1}) we get,
\begin{equation}\label{im_log1}
\hat{\theta}_{k} = \Im{\left(\log{\sum_{i=1}^{M}\alpha_{i}\frac{\widetilde{z}^{k-1,f}_{m,i}}{|\widetilde{z}^{k-1,f}_{m,i}|}}\right)}
\end{equation}
which can be rewritten as,
\begin{equation}\label{im_log2}
\hat{\theta}_{k} = \Im{\left(\log{\sum_{i=1}^{M}\alpha_{i}\frac{z^{k-1,f}_{m} +
\frac{r_{k-1}x_{i}^{*}}{\sigma^2}}{|{z^{k-1,f}_{m} +
\frac{r_{k-1}x_{i}^{*}}{\sigma^2}}|}}\right)}
\end{equation}
We denote,
\begin{equation}\label{33}
G_{k-1} = |z^{k-1,f}_{m} +
\frac{r_{k-1}x_{i}^{*}}{\sigma^2}|
\end{equation}
and assume that $G_{k-1}$, the conditional causal MSE of the
phase estimation under mixture component $f_{i}(\theta_{k})$, is constant for all significant components.
Then,
\begin{equation}\label{34}
\hat{\theta}_{k} \approx \hat{\theta}_{k-1} +
\Im\left(\log\left({\sum_{i=1}^{M}\alpha_{i}\left(1+\frac{r_{k-1}x_{i}^{*}}{G_{k-1}z^{k-1,f}_{m}\sigma^2}\right)}\right)\right)
\end{equation}
where,
\begin{equation}\label{theta}
\hat{\theta}_{k-1} = \angle z^{k-1,f}_{m}
\end{equation}
\begin{equation}\label{35}
\hat{\theta}_{k} \approx \hat{\theta}_{k-1} +
\Im\left(\log\left({1+\frac{r_{k-1}}{G_{k-1}z^{k-1,f}_{m}\sigma^2}\left(\sum_{i=1}^{M}\alpha_{i}x_{i}^{*}\right)}\right)\right)
\end{equation}
We will define $c_{soft}$ as the soft decision symbol using the significant components,
\begin{equation}\label{36}
c_{soft} = \sum_{i=1}^{M}\alpha_{i}x_{i}
\end{equation}
Since we assume high SNR and small phase noise variance, the
tracking conditional MSE will be low, i.e. $|z^{k,f}_{1}|$ will be high.
Using the fact that for small angles $\phi$,
\begin{equation}\label{37}
\angle(1+\phi) \approx \Im(\phi)
\end{equation}
Therefore,
\begin{equation}\label{log_simp}
\hat{\theta}_{k} \approx \hat{\theta}_{k-1} +
\Im(\frac{r_{k-1}c^{*}_{soft}}{G_{k-1}z^{k-1,f}_{m}\sigma^2})
\end{equation}
Using again the small-angle approximation $\sin(x) \approx x$, we obtain
\begin{equation}\label{pll_equiv}
\hat{\theta}_{k} \approx \hat{\theta}_{k-1} +
\frac{|r_{k-1}||c^{*}_{soft}|}{G_{k-1}|z^{k-1,f}_{m}|\sigma^2}(\angle{r_{k-1}}+\angle{c^{*}_{soft}}-\hat{\theta}_{k-1})
\end{equation}
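The recursion (\ref{pll_equiv}) has the structure of a first-order PLL. The sketch below implements a generic update of this form, with the composite gain $\frac{|r_{k-1}||c^{*}_{soft}|}{G_{k-1}|z^{k-1,f}_{m}|\sigma^2}$ collapsed into a single constant \texttt{gain} (an assumption made for illustration); with noiseless observations the estimate converges geometrically to the true phase.

```python
import cmath
import math

def pll_track(observations, symbols, theta0, gain):
    # First-order PLL-style recursion mirroring the structure of (pll_equiv):
    # theta_k = theta_{k-1} + gain * wrap(angle(r) + angle(c*) - theta_{k-1})
    theta = theta0
    trajectory = []
    for r, c in zip(observations, symbols):
        # cmath.phase wraps the phase error into (-pi, pi]
        err = cmath.phase(r * c.conjugate() * cmath.exp(-1j * theta))
        theta += gain * err
        trajectory.append(theta)
    return trajectory
```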
\bibliographystyle{plain}
\bibliography{strings}
\end{document} | {"config": "arxiv", "file": "1306.3693/final.tex"} |
\begin{document}
\maketitle
\begin{abstract}
\noindent
In this article we show that extendability from one side of a simple analytic curve
is a rare phenomenon in the topological sense in various spaces of functions. Our
result can be proven using Fourier methods combined with other facts or by complex
analytic methods and a comparison of the two methods is possible.
\end{abstract}
\noindent AMS Classification $n^o$: 42A16, 30H10\\
\noindent Key words and phrases: extendability, real analytic functions, Fourier
series, Cauchy transform, Michael's selection theorem, Borel's theorem, Baire's
theorem, generic property.
\section{Introduction}
In [2] the space $C^\infty([0,1])$, endowed with its natural Frechet
topology, is considered. It is proven that the set $A$ of nowhere analytic functions $f\in C^\infty([0,1])$
contains a dense and $G_\delta$ subset of $C^\infty([0,1])$.
In the present paper we strengthen the above result by showing that the set $A$ is
itself $G_\delta$. Furthermore, we prove a similar result in the space of
$C^\infty$ periodic functions, that is in $C^\infty(T)$, where $T$ is the unit
circle.
In order to do this we combine the fact that non extendability is a generic
phenomenon for the space $A^\infty(D)$, where $D$ is the unit disk ([8]), with some
elements of Fourier Analysis. More precisely, using the Cauchy transform,
$C^\infty(T)$ is a direct sum of $A^\infty(D)$ and another space $Y$. This space
$Y$ contains functions holomorphic outside the open unit disk, which extend smoothly
on $T$. A careful observation of the above facts enables us to prove that generically
all functions in $C^\infty(T)$ are nowhere extendable from either side of the
unit circle $T$. We notice that extendability from only one side of the domain of
definition has already been considered in [4] and the references therein. There one
investigates sufficient conditions so that, if all derivatives of a function at a
point of the boundary vanish, one can conclude that the one-side extension is
identically equal to zero. It is, as far as we know, the first time that the
phenomenon of one-side extendability is noticed to be a rare phenomenon.
Next, we tried to extend the previous results from the whole circle $T$ to a subarc
of it (or equivalently to a segment). To do this we combined the previous result
with a theorem of Borel ([7]) saying that any complex sequence appears as the
sequence
of derivatives at $0$ of a $C^\infty$ function and with a simple version of
Michael's selection theorem [11]. The latter allows us to assign in a continuous way
a $C^\infty$ function $f$ to every complex sequence $a_n$, $n=0,1,2,\ldots$ so that
$f^{(n)}(0)=a_n$ for all $n$. It was also necessary to localize a necessary and
sufficient condition for non extendability of holomorphic functions in the open
disk $D$. More precisely, it is known [8] that a holomorphic function $f\in H(D)$ is
nowhere extendable if and only if
$R_\zeta=\mathrm{dist}(\zeta,\partial D=T)$ for all $\zeta\in D$, where
$R_\zeta$ denotes the radius of convergence of the Taylor expansion of $f$ with center
$\zeta$.
The localized version of it is the following: A holomorphic function $f\in H(D)$
is not extendable at any open set containing $1\in T=\partial D$ if and only if
$R_\zeta\leq |\zeta-1|$ for all $\zeta\in D$ (or equivalently for a denumerable
subset of $\zeta$'s in D clustering to 1).
In this way, the Fourier method, combined with other facts, yields the results for
any circular arc, segment, the circle $T$, the real line $\mathbb{R}$ for the spaces
$C^\infty$ and $C^k$, $k=0,1,2,\ldots$, where $C^0$ denotes the space of continuous
functions, and for some variations of these spaces. Furthermore, composing with
a conformal map, we can transfer the above result at any injective analytic curve
(replacing T, $\mathbb{R}$ or [0,1]). We also prove that, if a domain $\Omega$ in
$\mathbb{C}$ is bounded by a finite number of disjoint analytic Jordan curves, then
generically in $A^\infty(\Omega)$ every function $f$ is nowhere real analytic on the
boundary of $\Omega$. When we say real analytic we mean with respect to a conformal
parameter of the boundary analytic curves $\gamma_i$, $i=1,2,...,n$.
But then naturally the following question arises. What happens if we parametrize
$\gamma_i$ in a different way, for instance, by the arc length parameter? In order to
answer this it has been proven in [9] that the arc length parameter is a global
conformal parameter for any analytic curve, thus, nothing changes if we use the arc
length parameter.
At the last section of the present paper we present a complex analytic approach,
based
on the Hardy space $H^\infty(D)$ and Montel's theorem, which yields all previous
results in a much simpler and natural way. In general the complex method works in
fewer cases than the real-harmonic analysis method, but with simpler
and less combinatorial arguments; in the present case the complex method initially
went much further than the Fourier method, and in a much more natural and easy
way. In order to push the Fourier method to give almost as many results as the
complex
method, we are led to combine it with Michael's selection theorem and with Borel's
theorem, and to localize non extendability results from [8]. The procedure is not
simple but clearly interesting and beautiful.
\section{Preliminaries}
\begin{prop}
Let $L\subset \mathbb{C}$ be a closed set and $\Omega\subset \mathbb{C}$ be an open
set. Let $f: \Omega\rightarrow \mathbb{C}$ be a continuous function on $\Omega$,
which is holomorphic on $\Omega \setminus L$. Then, if $L$ satisfies one of the
following conditions (i)-(v), it automatically follows that $f$ is
holomorphic on the whole of $\Omega$.
\begin{enumerate}[(i)]
\item $L=\left\{x+ig(x): x\in [a,b]\right\}$, where $g:[a,b]\rightarrow \mathbb{R}$
is continuous and of bounded variation (that is $L$ has finite length) and $a<b$.
\item $L$ is the boundary of a convex body in $\mathbb{C}$
\item $L$ is a straight line or circle or segment or circular arc.
\item $L$ is an analytic Jordan curve or a compact simple analytic arc.
\item $L$ is a conformal image of a set of the previous types.
\end{enumerate}
\end{prop}
\begin{proof}
i) Let $z_0 \in L$. We denote the open square of center $z_0$ and side $r>0$ as
$E(z_0,r)$. We can find $r_0 > 0$ such that
$\partial E(z_0,r_0)\setminus L$ has two connected components $E_1$ and
$E_2$. We denote by $\gamma$ the boundary $\partial E(z_0,r_0)$ of $E(z_0,r_0)$,
and define the function
$$F(z)=\dfrac{1}{2\pi i} \int_{\gamma}{\dfrac{f(w)}{w-z}dw},$$ which is holomorphic
at $\mathbb{C}\setminus \gamma$. We will prove that $F(z)=f(z)$ for $z\in E(z_0,r)$
and thus $f$ will be holomorphic at $E(z_0,r)$. Since $z_0$ was arbitrary at $L$ we
will have proved that $f$ is holomorphic at $\Omega$.
In order to prove that $F(z)=f(z)$ for $z\in E(z_0,r)$ we first notice that
$$F(z)=\dfrac{1}{2\pi i} \int_{\gamma_1}{\dfrac{f(w)}{w-z}dw}+\dfrac{1}{2\pi i}
\int_{\gamma_2}{\dfrac{f(w)}{w-z}dw},$$ where $\gamma_1=(\overline{E(z_0,r)}\cap L) +
E_1$, $\gamma_2=(\overline{E(z_0,r)}\cap L) + E_2$ and $\gamma_1, \gamma_2$
are counterclockwise oriented.
Let $z\in E(z_0,r)\setminus L$ and $\varepsilon>0$.
If $z$ belongs to the interior of
$\gamma_1$, then we will prove that
$$\dfrac{1}{2\pi i}\int_{\gamma_2}{\dfrac{f(w)}{w-z}dw}=0.$$
We can find a $t_0 \in \mathbb{R}$ such that
$f$ is holomorphic on the interior of
$\gamma_{2,t_0}=\gamma_2+it_0$, where ``$+$'' means translation in $\mathbb{C}$, and
such that the curves $\gamma_{2,\lambda t_0}=\gamma_2+i \lambda t_0$ do not
intersect $\gamma_2$ for $0<\lambda \leq 1$. For this it suffices to choose a
suitable $t_0 >0$ or $t_0<0$.
From the uniform continuity of $\dfrac{f(w)}{w-z}$ on any compact set at positive
distance from $z$, we can find a $0<\lambda_0 \leq 1$ such that
$$\left|\dfrac{f(w)}{w-z}-\dfrac{f(w+ i\lambda t_0)}{w+ i\lambda t_0-z}\right|\leq
\dfrac{\pi \varepsilon}{M}$$ for $0 \leq \lambda \leq \lambda_0$ and $w\in \gamma_2$,
where $M<\infty$ is the length of $\gamma_2$. Then
\begin{align*}
\left|\dfrac{1}{2\pi i} \int_{\gamma_2}{\dfrac{f(w)}{w-z}dw}-\dfrac{1}{2\pi i}
\int_{\gamma_{2,\lambda t_0}}{\dfrac{f(w)}{w-z}dw}\right|&=\dfrac{1}{2\pi}\left|
\int_{\gamma_2}{\Big(\dfrac{f(w)}{w-z}-{\dfrac{f(w+i\lambda t_0)}
{w+i\lambda t_0-z}\Big)dw}}\right| \\ &\leq \dfrac{1}{2\pi}\int_{\gamma_2}
{\left|\dfrac{f(w)}{w-z}-\dfrac{f(w+i\lambda t_0)}{w+i\lambda t_0-z}\right||dw|}
\leq \dfrac{1}{2\pi} \int_{\gamma_2}\dfrac{\pi\varepsilon}{M}|dw|
=\dfrac{1}{2\pi}\dfrac{\pi\varepsilon}{M}M=\dfrac{\varepsilon}{2}<\varepsilon
\end{align*}
for every $0<\lambda \leq \lambda_0$.
Thus $$\lim_{\substack{\lambda\to 0 \\ \lambda>0}}\dfrac{1}{2\pi i}
\int_{\gamma_{2,\lambda t_0}}{\dfrac{f(w)}{w-z}dw}=
\dfrac{1}{2\pi i}\int_{\gamma_2}{\dfrac{f(w)}{w-z}dw}.$$
But by Cauchy's theorem
$$\dfrac{1}{2\pi i}\int_{\gamma_{2,\lambda t_0}}{\dfrac{f(w)}{w-z}dw}=0$$ for every
$0 < \lambda \leq 1$, since $f$ is holomorphic on the interior of
$\gamma_{2,\lambda t_0}$ and $z$ lies outside it. Similarly,
$$\lim_{\substack{\lambda \to 0 \\ \lambda>0}}\dfrac{1}{2\pi i}
\int_{\gamma_{1,\lambda s_0}}{\dfrac{f(w)}{w-z}dw}=
\dfrac{1}{2\pi i}\int_{\gamma_1}{\dfrac{f(w)}{w-z}dw},$$ where
$\gamma_{1,\lambda s_0}=\gamma_1+i\lambda s_0$ for a suitable $s_0$. But by
Cauchy's Integral Formula
$$\dfrac{1}{2\pi i}\int_{\gamma_{1,\lambda s_0}}{\dfrac{f(w)}{w-z}dw}=f(z)$$
for every $0<\lambda \leq 1$. Consequently,
$$F(z)=f(z).$$
A similar argument shows that $F(z)=f(z)$ for every $z$ in the interior of the curve
$\gamma_2$. Finally, it follows from the continuity of $F$ and $f$ that,
if $z\in E(z_0,r)\cap L$, then $F(z)=f(z)$.
(ii) If $L$ is the boundary of a convex body, then its projection to the $x$-axis is
a compact interval $[a,b]$, $a<b$. A vertical line $x=x_0$, $a<x_0<b$, intersects
$L$ at two points, and in this way two continuous functions $g,h$ on $(a,b)$ are
defined, where $g(x)\leq h(x)$ for $x\in (a,b)$. Both $g,h$ can be extended
continuously at $a,b$, and their graphs have finite length, since $g$ is convex and
$h$ is concave.
The vertical lines $x=a$ and $x=b$ intersect $L$ at a point or at a vertical segment.
For the points $(x_0,g(x_0))$ and $(x_0,h(x_0))$, where $a<x_0<b$, case (i) applies,
and thus $f$ is holomorphic at those points.
If $L$ contains a vertical segment, then the proof of case (i) can be used to prove
the assertion at the interior points of the segment. A finite number of points
remains, at least $2$ and at most $4$. By Riemann's theorem on removable
singularities, $f$ will also be holomorphic at those points.
Therefore, $f$ is holomorphic on $\Omega$.
(iii) It follows from cases (i) and (ii).
(iv) For an analytic Jordan curve there exists
a conformal map which maps the curve onto a circle and an open neighbourhood of
the curve onto an open neighbourhood of the circle. The same holds true for a
compact simple analytic arc, which is mapped onto a circular arc. The result now
follows from case (iii).
(v) Obvious.
\end{proof}
The previous conditions are only sufficient. M. Papadimitrakis informed us
that a necessary and sufficient condition is that the continuous analytic capacity
$a(L)$ [5] is equal to $0$.
\section{The Fourier method on the circle and non-extendability}
The basic result of this section is that, generically in
$C^k(T)$, a function extends holomorphically neither
to the interior nor to the exterior of the unit circle $T$.
Another result is that, generically in $C^k(T)$,
a function is not holomorphically extendable on any open disk
intersecting the unit circle $T$.
We note that this is the first time it is proved that the phenomenon
of holomorphic extension from one side is topologically rare.
For one-sided holomorphic extensions we refer to
[4] and to the articles contained in its bibliography.\\
Let $\Omega\subset\mathbb{C}$, $\Omega\neq \mathbb{C}$, be a domain and
$z_0\in \partial\Omega$ for which there is a $\delta>0$ such that the open set
$D(z_0,r)\cap \Omega$ is connected for every $r\leq \delta$ (*).
The space of holomorphic functions on $\Omega$ is
denoted by $H(\Omega)$, and its topology is that of uniform convergence on compact
subsets of $\Omega$. With this topology $H(\Omega)$ is a Fr\'echet space. From now
on, the symbols $\Omega,z_0$ are as above, unless otherwise stated.
\begin{defi}
Let $\Omega \subset \mathbb{C},\Omega\neq \mathbb{C}$ be a domain,
$f\in H(\Omega)$ and $z_0\in\partial\Omega$.
The function $f$ belongs to the class $U_1(\Omega,z_0)$ if there is no pair of an
open disk $D(z_0,r)$, $r>0$, and a holomorphic function $F:D(z_0,r)
\rightarrow \mathbb{C}$ such that $F|_{\Omega\cap D(z_0,r)}=
f|_{\Omega\cap D(z_0,r)}$.
\end{defi}
If $\Omega \subset \mathbb{C}$, $\Omega\neq \mathbb{C}$, is a domain and
$f\in H(\Omega)$, we will denote by $R_\zeta\in(0,+\infty]$
the radius of convergence of the Taylor expansion of $f$ with center
$\zeta\in\Omega$, which always satisfies
$R_\zeta\geq \mathrm{dist}(\zeta,\partial\Omega)$.
Our purpose is to characterize the class $U_1(\Omega,z_0)$.
\begin{prop}
Let $\Omega\subset\mathbb{C},\Omega\neq \mathbb{C}$ be a domain,
$z_0\in \partial\Omega$ such that (*) holds and $f \in H(\Omega)$.
Then the following are equivalent:
\begin{enumerate}[(i)]
\item $f\in U_1(\Omega,z_0)$;
\item for every $w$ in $\Omega\cap D(z_0,\delta)$ the radius of
convergence satisfies $R_w\leq |w-z_0|$;
\item for every $z$ in a dense subset $A$ of $\Omega\cap D(z_0, \delta)$
the radius of convergence satisfies $R_z\leq |z-z_0|$;
\item for some sequence $z_n\in \Omega\cap D(z_0,\delta)$, $n=1,2,3,\dots$,
converging to $z_0$, the radius of convergence satisfies $R_{z_n}\leq |z_n-z_0|$.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)\Rightarrow (ii)$. We assume that $R_w>|w-z_0|$ for some
$w\in \Omega\cap D(z_0,\delta)$ and we notice that $z_0\in D(w,R_w)$.
Since $D(w,R_w)$ is open, there exists an $s>0$ with $s\leq\delta$ such
that $D(z_0,s)\subset D(w,R_w)$.
Let $G$ be the holomorphic function in $D(w,R_w)$ defined by
the Taylor series of $f$ at $w$. Then $G$ coincides
with $f$ on the open disk $D(w,\mathrm{dist}(w,\partial\Omega))$ and, by analytic
continuation, coincides with $f$ on $D(z_0,s)\cap \Omega$, because
$D(z_0,s)\cap \Omega$ is also connected. Besides, $G$ is
holomorphic in $D(z_0,s)$, and therefore $f$
does not belong to the class $U_1(\Omega,z_0)$, which is absurd.
$(ii)\Rightarrow(i)$. If $(i)$ does not hold,
then there is an open disk
$D(z_0,s)$, $0<s\leq \delta$, and a holomorphic function
$F:D(z_0,s) \rightarrow \mathbb{C}$
such that $F|_{\Omega\cap D(z_0,s)}=f|_{\Omega\cap
D(z_0,s)}$. Let $t$ be a point in $\Omega$ such that
$t\neq z_0$ and $|t-z_0|<\dfrac{s}{2}$.
Then, since $F^{(l)}(t)=f^{(l)}(t)$ for every $l=0,1,2,\dots$, the
Taylor series of $F$ with center $t$ coincides with
the Taylor series of $f$ with center $t$.
Hence $R_t\geq \mathrm{dist}(t,\partial D(z_0,s))
\geq s-\dfrac{s}{2}=\dfrac{s}{2}$, but on the other hand $(ii)$ gives
$R_t\leq |t-z_0|<\dfrac{s}{2}$, which is absurd.
$(ii)\Rightarrow(iii)$. Obvious.
$(iii)\Rightarrow(i)$. Same as the direction $(ii)\Rightarrow(i)$, because
for every $s>0$ there is a $z\in A$ such that $|z-z_0|<\dfrac{s}{2}$.
$(ii)\Rightarrow(iv)$. Obvious.
$(iv)\Rightarrow(i)$. Same as the direction
$(ii)\Rightarrow(i)$, because, if $z_n\in \Omega\cap D(z_0,\delta)$
converges to $z_0$, then for every $s>0$ there is an $m \in
\left\{1,2,3,...\right\}$ such that $|z_m-z_0|<\dfrac{s}{2}$.
\end{proof}
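To make the equivalences above concrete, here is a standard illustration (our own example, not part of the original text): a lacunary series whose natural boundary is $T$.

```latex
$$f(z)=\sum_{n=0}^{\infty} z^{2^{n}}, \qquad z\in D .$$
```

By Hadamard's gap theorem, $f$ extends holomorphically past no point of $T$, so $R_z=\mathrm{dist}(z,T)=1-|z|$ for every $z\in D$. For any $z_0\in T$ we have $|z-z_0|\geq 1-|z|=R_z$, so condition (ii) holds at $z_0$, and the proposition gives $f\in U_1(D,z_0)$ for every $z_0\in T$.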
\begin{defi}
Let $\Omega\subset\mathbb{C}$, $\Omega\neq \mathbb{C}$, be a domain,
$z_0\in \partial\Omega$ and $z\in \Omega$. A function
$f \in H(\Omega)$ belongs to the class $F(\Omega,z,z_0)$ if
the radius of convergence of the Taylor
expansion of $f$ with center $z$ is less than or equal to $|z-z_0|$.
\end{defi}
We will now give a nice description of the class
$F(\Omega,z,z_0)$, but first we recall from [8] that if
$\Omega\subset\mathbb{C}$, $\Omega\neq \mathbb{C}$, is a domain, then the set of
functions which are holomorphic exactly on $\Omega$ (that is, extendable at no
boundary point) is a dense and $G_\delta$ subset of $H(\Omega)$.
\begin{prop}
Let $\Omega\subset\mathbb{C}$, $\Omega\neq \mathbb{C}$, be a domain,
$z_0\in \partial\Omega$ such that (*) holds and $z\in \Omega$.
Then the class $F(\Omega,z,z_0)$
is a dense and $G_\delta$ subset of $H(\Omega)$.
\end{prop}
\begin{proof}
Since $z_0\in\partial\Omega$,
if a function $f\in H(\Omega)$ is holomorphic exactly on
$\Omega$, then the radius of convergence of the Taylor
expansion of $f$ with center $z$ is exactly
equal to $\mathrm{dist}(z,\partial \Omega)$, and thus
it is also less than or equal to
$|z-z_0|$. Therefore, the above set is a subset of $F(\Omega,z,z_0)$,
and thus $F(\Omega,z,z_0)$ is dense in $H(\Omega)$.
Let $d_z =|z-z_0|>0$. Since the radius of convergence is given by the
Cauchy--Hadamard formula
$$R_z(f) =\dfrac{1}{\limsup\limits_n \sqrt[n]{\dfrac{|f^{(n)}(z)|}
{n!}}},$$ we can deduce that
$$F(\Omega,z,z_0)=\bigcap\limits_{s=1}^\infty \bigcap\limits_
{n=1}^\infty \bigcup\limits_{k=n}^\infty
\left\{f\in H(\Omega): \dfrac{1}{d_z}-\dfrac{1}{s}<
\sqrt[k]{\dfrac{|f^{(k)}(z)|}
{k!}}\right\}.$$
From the continuity of the maps $H(\Omega)\ni f\mapsto
f^{(k)}(z)\in \mathbb{C}$, $k=1,2,\dots$, it follows that
$F(\Omega,z,z_0)$ is a $G_\delta$ set, and the proof is complete.
\end{proof}
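As a simple explicit member of the classes $F(\Omega,z,z_0)$ (our own illustration, not part of the original text), take $\Omega=D$, $z_0=1$ and $f(w)=\dfrac{1}{1-w}$. Its Taylor expansion with center $z\in D$ is

```latex
$$\frac{1}{1-w}=\sum_{n=0}^{\infty}\frac{(w-z)^{n}}{(1-z)^{n+1}},
\qquad |w-z|<|1-z| ,$$
```

so $R_z=|1-z|=|z-z_0|$ exactly, and $f\in F(D,z,1)$ for every $z\in D$; by Proposition 3.2, $f\in U_1(D,1)$. On the other hand, $f$ extends holomorphically at every other point of $T$, so $f\notin U_1(D,z_0)$ for $z_0\in T$, $z_0\neq 1$.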
We now have all the prerequisites to prove the next theorem:
\begin{theorem}
Let $\Omega \subset \mathbb{C},\Omega\neq \mathbb{C}$ be a domain
and $z_0\in\partial\Omega$ such that (*) holds.
Then the set of functions which are non-extendable at $z_0$,
$U_1(\Omega,z_0)$,
is a dense and $G_\delta$ subset of $H(\Omega)$.
\end{theorem}
\begin{proof}
Let $A=(\mathbb{Q}+i\mathbb{Q})\cap \Omega$, which is a countable
dense subset of $\Omega$. From Proposition 3.2 we have
$\bigcap\limits_{\zeta\in A}F(\Omega,\zeta,z_0)=U_1(\Omega,z_0)$,
and from Proposition 3.4 and Baire's theorem this intersection is a dense and
$G_\delta$ subset of $H(\Omega)$.
\end{proof}
We will now use Theorem 3.5 to prove some nice results.
The open unit disk and the unit circle will be denoted by
$$D=\left\{z\in \mathbb{C}:|z|<1\right\}$$ and
$$T=\left\{e^{it}: t\in \bbR\right\}$$ respectively.
\begin{remark}
We can easily see that for every $z_0\in T$ and $r>0$ the open sets
$D(z_0,r)\cap D$ and $D(z_0,r)\cap (\mathbb{C}\setminus \overline D)$ are
connected. Thus condition (*) holds, and Proposition 3.2 applies, for the domains
$D$ and $\mathbb{C}\setminus \overline D$ and for every $z_0 \in T$.
\end{remark}
A function $u:T \rightarrow \mathbb{C}$ belongs to the space $C(T)=C^0(T)$ if it
is continuous. The topology of $C(T)$ is defined by the norm
$$\sup\limits_{\theta \in \mathbb{R}}\left|u(e^{i\theta})\right|.$$
A function $u:T \rightarrow \mathbb{C}$ belongs to the
space $C^k(T)$, $k=1,2,\dots$, if it is $k$ times continuously
differentiable with respect to $\theta\in\bbR$. The topology of $C^k(T)$
is defined by the seminorms
$$\sup\limits_{\theta \in \mathbb{R}}\left|\dfrac{d^lu(e^{i\theta})}
{d{\theta}^l}\right|, \quad l=0,1,2,\dots,k.$$
The same topology can be defined by the seminorms
$$\sup\limits_{\theta \in \mathbb{R}}\left|\dfrac{d^lu(e^{i\theta})}
{d{(e^{i\theta})^l}}\right|, \quad l=0,1,2,\dots,k.$$
The above spaces are Banach spaces and thus Baire's Theorem is at our disposal.
A function $u:T \rightarrow \mathbb{C}$ belongs to the
space $C^\infty(T)$ if it is infinitely differentiable with
respect to $\theta\in\bbR$. The topology of $C^\infty(T)$
is defined by the seminorms
$$\sup\limits_{\theta \in \mathbb{R}}\left|\dfrac{d^lu(e^{i\theta})}
{d{\theta}^l}\right|, \quad l=0,1,2,\dots$$
Also, the same topology can be defined by the seminorms
$$\sup\limits_{\theta \in \mathbb{R}}\left|\dfrac{d^lu(e^{i\theta})}
{d{(e^{i\theta})^l}}\right|, \quad l=0,1,2,\dots$$
Equivalently, the same topology can be defined either by the seminorms
$$\sum\limits_{n \in \mathbb{Z}} {|a_n|},\quad \sum\limits_{n
\in \mathbb{Z}\setminus \left\{0\right \}}
|n|^l|a_n|, \quad l=1,2,3,\dots,$$ or by the seminorms
$$\sup\limits_{n \in \mathbb{Z}}|a_n|,\quad \sup\limits_{n \in \mathbb{Z}
\setminus \left\{0\right \}}|n|^l|a_n|, \quad l=1,2,3,\dots,$$ where
$$a_n=\hat{u}(n)=\dfrac{1}{2\pi}\int_{0}^{2\pi}{u(e^{i\theta})e^{-in
\theta}d\theta}$$ is the $n$-th Fourier coefficient of $u \in C^\infty(T)$.
Moreover, one can easily see that a continuous
function $u:T \rightarrow \mathbb{C}$
is of class $C^\infty(T)$ if and only if its
Fourier coefficients $a_n$, $n \in \mathbb{Z}$, satisfy
$P(|n|)a_n \rightarrow 0$ for every polynomial $P$
as $n$ approaches $\pm \infty$.
An easy consequence of the above is that
trigonometric polynomials are dense in $C^\infty(T)$,
since $$\sum\limits_{n=-N}^N{a_ne^{in\theta}}\xrightarrow[\
N\to \infty]{} \sum\limits_{n=-\infty}^\infty{a_ne^{in\theta}}$$ in the topology of
$C^\infty(T)$, for every
$u(e^{i\theta})=\sum\limits_{n=-\infty}^\infty{a_ne^{in\theta}}\in C^\infty(T)$.
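As a concrete illustration of the decay criterion (our own example), consider the rapidly decreasing coefficients $a_n=e^{-|n|}$:

```latex
$$u(e^{i\theta})=\sum_{n\in\mathbb{Z}} e^{-|n|}e^{in\theta}
=\frac{1-r^{2}}{1-2r\cos\theta+r^{2}}, \qquad r=e^{-1}.$$
```

Since $P(|n|)e^{-|n|}\rightarrow 0$ for every polynomial $P$, the criterion gives $u\in C^\infty(T)$; the closed form is the Poisson kernel evaluated at radius $r=e^{-1}$.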
Another interesting space is the space of functions holomorphic on $D$ all of whose
derivatives extend continuously to $\overline D$; it will be denoted by
$A^\infty(D)$. The topology of $A^\infty(D)$ is defined by the seminorms
$$\sup\limits_{z\in D}\left| {\dfrac{d^lf(z)} {dz^l}}\right|, \quad l=0,1,2,\dots$$
Equivalently, it can be defined by the seminorms
$$\sup\limits_{\theta \in \bbR}\left|{\dfrac{d^lf(e^{i\theta})}{d\theta^l}}
\right|, \quad l=0,1,2,\dots$$ If $$f(z)=\sum\limits_{n=0}^\infty {a_nz^n}, \quad
|z|<1,$$ then the topology of $A^\infty(D)$ is also
induced by the seminorms $$\sum\limits_{n=0}^\infty {|a_n|},\quad \sum
\limits_{n=1}^\infty n^l|a_n|, \quad l=1,2,3,\dots$$
In addition, the same topology is defined by the
seminorms $$\sup\limits_{n=0,1,\dots}|a_n|,\quad \sup\limits_{n =1,2,\dots}
n^l|a_n|, \quad l=1,2,3,\dots$$
Also, as above, if $f(z)=\sum\limits_{n=0}^
\infty{a_nz^n}$, $|z|<1$, then $f$ belongs to $A^\infty(D)$
if and only if $P(n)a_n \rightarrow 0$ as $n \rightarrow \infty$
for every polynomial $P$. It follows that a continuous function
$u:T\rightarrow \mathbb{C}$ extends to an $f\in A^\infty(D)$ if and only if
$\hat{u}(n)=0$ for all $n<0$ and $u\in C^\infty(T)$. Since
$C^\infty(T)$ and $A^\infty(D)$ are Fr\'echet
spaces, Baire's theorem is at our disposal. Moreover,
it is easy to see that polynomials are dense in $A^\infty(D)$,
although the same question is open in the case
of an arbitrary simply connected domain $\Omega\subset \mathbb{C}$ such that
$(\mathbb{C}\cup\left\{\infty \right\})
\setminus \overline {\Omega}$ is connected and ${\overline{\Omega}}^o=\Omega$.
At this point we define the classes
$A_0^\infty(D)=\left\{f\in A^\infty(D): f(0)=0\right \}$ and
$$A_0^{\infty}((\mathbb{C}\setminus \overline{D})\cup\left\{\infty\right\})=
\left\{f\in A^\infty(\mathbb{C}\setminus \overline{D}):
\lim\limits_{z\rightarrow\infty} {f(z)}=0\right \},$$
where $A^\infty(\mathbb{C}\setminus \overline{D})$ is defined analogously to
$A^\infty(D)$; these classes will be needed later.
Let $u\in C^\infty(T)$ with Fourier coefficients $a(n)=\hat{u}(n)$. Then the
functions
$$f(z)=\sum\limits_{n=0}^\infty{a(n)z^n}$$ and
$$g(z)=\sum\limits_{n=-\infty}^{-1}\overline{a(n)}z^{-n}$$
belong to the classes $A^\infty(D)$ and $A_0^\infty(D)$
respectively, and $u=f|_{T}+\overline{g|_{T}}$. Since
${A^{\infty}(D)}\cap \overline{{A_{0}^{\infty}(D)}}=\left\{0\right\}$,
we have that $$ C^{\infty}(T)= {A^{\infty}(D)}|_{T}
\oplus \overline{{A_{0}^\infty(D)}}|_{T},$$ because also the projections
$\pi_1 : C^\infty(T) \rightarrow A^\infty(D)$,
$\pi_2 : C^\infty(T) \rightarrow A_{0}^\infty(D)$, $\pi_1(u)=f$,
$\pi_2(u)=g$, with $f,g$ as above, are continuous.
Let us verify, for example, the continuity of $\pi_1$.
If $u\in C^\infty(T)$ and the sequence $u_m\in C^\infty(T)$,
$m=1,2,\dots$, converges to $u$, then
$$\lim\limits_{m \rightarrow \infty}{\sum\limits_{n\in \bbZ}
{|a_m(n)-a(n)|}}=0, \qquad \lim\limits_{m \rightarrow \infty}{\sum
\limits_{n \in \mathbb{Z} \setminus \left\{0\right \}}
|n|^l|a_m(n)-a(n)|}=0,$$ $l=1,2,3,\dots$, where $a_m(n)$ are the Fourier
coefficients of $u_m$. Consequently,
$$\sum\limits_{n=0}^\infty {|a_m(n)-a(n)|}\leq\sum\limits_{n\in \bbZ}
{|a_m(n)-a(n)|}\xrightarrow[\
m\to \infty]{} 0,$$
$${\sum\limits_{n=1}^\infty |n|^l|a_m(n)-a(n)|}\leq{\sum\limits_{n
\in \mathbb{Z}\setminus \left\{0\right \}}
|n|^l|a_m(n)-a(n)|}\xrightarrow[\
m\to \infty]{} 0,$$ $l=1,2,3,\dots$. Thus we deduce that
$\pi_1(u_m)$ converges to $\pi_1(u)$, so $\pi_1$ is
continuous. Similarly we can see that $\pi_2$ is continuous.
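A minimal worked example of this decomposition (ours, for illustration): for $u(e^{i\theta})=\cos\theta$ the only nonzero Fourier coefficients are $a(1)=a(-1)=\tfrac{1}{2}$, so

```latex
$$\pi_1(u)(z)=f(z)=\frac{z}{2}, \qquad
\pi_2(u)(z)=g(z)=\overline{a(-1)}\,z=\frac{z}{2},$$
```

and indeed $f(e^{i\theta})+\overline{g(e^{i\theta})}=\tfrac{1}{2}e^{i\theta}+\tfrac{1}{2}e^{-i\theta}=\cos\theta$.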
Thinking of $h=f+\bar{g}$, we notice that the function
$h$ is harmonic in $D$
and that each differential operator
$$L_{k,l}=\dfrac{{\partial}^l}{\partial z^l}\dfrac{{\partial}^k}
{\partial \bar{z}^k},$$ applied to $h$, yields
a harmonic function in $D$ which extends
continuously to $\overline{D}$. We therefore define $H_\infty(D)$ to be
the space of harmonic functions
$h: D \rightarrow \mathbb{C}$ such that $L_{k,l}(h)$ extends
continuously to $\overline{D}$ for all $k,l\geq 0$.
Note that for $k\geq 1$ and $l\geq 1$ we have
$L_{k,l}(h)=0$, since $h$ is harmonic.
The topology of $H_\infty(D)$ is defined
by the seminorms $$\sup\limits_{|z|\leq 1}{|L_{k,l}(h)(z)|}, \quad k,l\geq
0,$$ and equivalently, either by the seminorms
$$\sum\limits_{n \in \bbZ}
{|a(n)|},\quad \sum\limits_{n
\in \mathbb{Z}\setminus \left\{0\right \}}
|n|^l|a(n)|, \quad l=1,2,3,\dots,$$ or by the seminorms $$\sup\limits_{n
\in \mathbb{Z}}|a(n)|,\quad \sup\limits_{n \in \mathbb{Z}
\setminus \left\{0\right \}}|n|^l|a(n)|, \quad l=1,2,3,\dots$$
In this way the space $H_{\infty}(D)$ becomes a
Fr\'echet space. Also, we can see that
$$H_{\infty}(D)|_{T}= {A^{\infty}(D)}|_{T}
\oplus \overline{{A_{0}^\infty(D)}}|_{T}=C^{\infty}(T).$$
However, the aforementioned analysis is not the only interesting decomposition of
$C^\infty(T)$. Let $u\in C^\infty(T)$; then the functions
$$f(z)=\sum\limits_{n=0}^\infty{a(n)z^n}$$ and
$$g(z)=\sum\limits_{n=-{\infty}}^{-1}{a(n)z^n}$$
belong to the classes
$A^\infty(D)$ and $A_0^{\infty}((\mathbb{C}
\setminus \overline{D})
\cup\left\{\infty\right\})$ respectively, and
$u=f|_{T}+{g|_{T}}$.
Also, we notice that
$$f(z)=\dfrac{1}{2\pi i}\int_{T}{\dfrac{u(w)}{w-z}dw}, \quad |z|<1,$$ and
$$g(z)=\dfrac{1}{2\pi i}\int_{T}{\dfrac{u(1/w)}{w-1/z} dw}-\hat{u}(0), \quad |z|>1.$$
We point out that we already know that $f$ and $g$ extend
continuously to $T$, so the functions $f|_{T}$ and $g|_{T}$
are defined as limits of the above integral expressions.
Since $A^\infty(D)|_T\cap A_{0}^\infty((\mathbb{C}\setminus
\overline{D})\cup\left\{\infty\right\})|_T=\left\{{0}\right\}$
and the maps
$p_1 :C^\infty(T) \rightarrow A^\infty(D)$,
$p_2 :C^\infty(T) \rightarrow A_{0}^\infty((\mathbb{C}\setminus
\overline{D})\cup\left\{\infty\right\})$, $p_1(u)=f$, $p_2(u)=g$,
with $f,g$ as above, are continuous,
we have that $$ C^{\infty}(T)={A^{\infty}(D)}|_{T}
\oplus {A_{0}^{\infty}((\mathbb{C}
\setminus \overline{D})\cup\left\{\infty\right\})}|_{T}.$$
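As a minimal illustration (ours): for $u(e^{i\theta})=\cos\theta$, whose only nonzero Fourier coefficients are $a(1)=a(-1)=\tfrac{1}{2}$, this second decomposition reads

```latex
$$f(z)=\frac{z}{2}\in A^{\infty}(D), \qquad
g(z)=\frac{1}{2z}\in A_0^{\infty}\big((\mathbb{C}\setminus\overline{D})\cup\{\infty\}\big),$$
```

and on $T$ indeed $f(e^{i\theta})+g(e^{i\theta})=\tfrac{1}{2}e^{i\theta}+\tfrac{1}{2}e^{-i\theta}=\cos\theta$, while $g(z)\rightarrow 0$ as $z\rightarrow\infty$.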
We return now to $C^k(T)$.
\begin{defi}
Let $z_0\in T$ and $k=0,1,2,\dots$ or $\infty$. A function $u \in C^k(T)$ belongs to the
class $U_2(T,z_0,k)$ if there is no pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:|z|> 1\right\}$, $r>0$
and a continuous function $\lambda:T\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap T}=u|_{D(z_0,r)\cap T}.$
\end{defi}
\begin{defi}
Let $z_0\in T$ and $k=0,1,2,\dots$ or $\infty$. A function $u \in C^k(T)$ belongs to the
class $U_3(T,z_0,k)$ if there is no pair of a domain $P(z_0,r)=D(z_0,r)\cap D,r>0$
and a continuous function $h:T\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap T}=u|_{D(z_0,r)\cap T}.$
\end{defi}
\begin{defi}
Let $z_0\in T$ and $k=0,1,2,\dots$ or $\infty$. A function $u \in C^k(T)$ belongs to the
class $U_4(T,z_0,k)$ if there is neither a pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:|z|> 1\right\},r>0$
and a continuous function $\lambda:T\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap T}
=u|_{D(z_0,r)\cap T}$, nor a pair of a domain $P(z_0,r)=D(z_0,r)\cap
D,r>0$ and a continuous function $h:T\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap T}=u|_{D(z_0,r)\cap T}.$
\end{defi}
\begin{remark}
In Definitions 3.7, 3.8 and 3.9, if $u\in C^\infty(T)$, it follows automatically
that every derivative of $\lambda$ and $h$ is
continuous on the sets $(D(z_0,r)\cap T)\cup\Omega(z_0,r)$ and
$(D(z_0,r)\cap T)\cup P(z_0,r)$ respectively.
To see this, if for a $u\in C^\infty(T)$, $u=f|_{T}+g|_{T}$,
there is an open disk $D(z_0,r)$ and a function $\lambda$ as in Definition
3.7, then $f$ extends continuously on $D(z_0,r)$ and
holomorphically on $\Omega(z_0,r)$. From a well-known result, $f$ then
extends holomorphically on $D\cup D(z_0,r)$ and, since every derivative of
$g$ extends continuously to $T$, every
derivative of $\lambda$ is continuous on $(D(z_0,r)\cap T)\cup\Omega(z_0,r)$.
Similarly we deduce the corresponding result for $h$.
\end{remark}
The classes $\bigcap\limits_{z_0 \in T} U_2(T,z_0,k)$,
$\bigcap\limits_{z_0 \in T}U_3(T,z_0,k)$ and
$\bigcap\limits_{z_0 \in T}U_4(T,z_0,k)$ will be denoted
by $U_2(T,k)$,
$U_3(T,k)$ and $U_4(T,k)$ respectively, where $k=0,1,2,\dots$ or
$\infty$.
If $u \in C^k(T)$, $k\geq 2$ or $k=\infty$, then the functions
$$f(z)=\sum\limits_{n=0}^\infty{a(n)z^n}$$ and
$$g(z)=\sum\limits_{n=-{\infty}}^{-1}{a(n)z^n}$$
are holomorphic on $D$ and $\mathbb{C}\setminus \overline{D}$ respectively,
extend continuously to $T$, and $u=f|_{T}+g|_{T}$.
Since $g$ is holomorphic on $\mathbb{C} \setminus \overline{D}$,
if $u$ extends continuously on $D(z_0,r)\cap
\left\{z\in \mathbb{C}:|z|\geq1\right\}$
and holomorphically on
$D(z_0,r)\cap\left\{z\in \mathbb{C}:|z|> 1\right\}$, where $z_0 \in T$, $r>0$, then
$f|_T=u-g|_T$ extends holomorphically on $D(z_0,r)$.
On the other hand, it is obvious that, if $f$ extends
holomorphically on $D(z_0,r)$,
then $u$ extends continuously on $D(z_0,r)\cap\left\{z\in \mathbb{C}:
|z|\geq 1\right\}$ and holomorphically on
$D(z_0,r)\cap\left\{z\in \mathbb{C}:|z|> 1\right\}$. In the following theorems,
$f$ and $g$ are as above.
\begin{theorem}
Let $z_0 \in T$. The class $U_2(T,z_0,\infty)$ is a dense and $G_\delta$
subset of $C^\infty(T)$.
\end{theorem}
\begin{proof}
From the above and Propositions 3.2 and 3.4,
$$U_2(T,z_0,\infty)=\bigcap\limits_{l=1}^\infty \bigcap\limits_{s=1}^\infty
\bigcap\limits_{n=1}^\infty \bigcup\limits_{k=n}^\infty
\left\{u\in C^\infty(T): \dfrac{1}{d_{z_l}}-\dfrac{1}{s}<
\sqrt[k]{\dfrac{|f^{(k)}(z_l)|}{k!}}\right\},$$ where $(z_l)_{l\geq 1}$ is a dense
sequence in $D$ and $d_{z_l}=|z_l-z_0|$.
We will first prove that the sets $$A(l,s,k)=\left\{u\in C^\infty(T):
\dfrac{1}{d_{z_l}}-\dfrac{1}{s}<\sqrt[k]{\dfrac{|f^{(k)}(z_l)|}{k!}}\right\}$$
are open. Let $u_p$, $p=1,2,3,\dots$, be a sequence in $C^\infty(T)$ converging to a
$u \in A(l,s,k)$ in the topology of $C^\infty(T)$. Then
\begin{align*}
|f_p(e^{i\theta})-f(e^{i\theta})|
&\leq \sum\limits_{i=0}^\infty |a_p(i)-a(i)|
\leq |a_p(0)-a(0)|+\sum\limits_{i=1}^\infty\dfrac{1}{i^2}
|a^{''}_p(i)-a^{''}(i)| \\
&\leq \sup\limits_{\theta\in\mathbb{R}}|u_p(e^{i\theta})-u(e^{i\theta})|
+\Big(\sum\limits_{i=1}^\infty\dfrac{1}{i^2}\Big)
\sup\limits_{\theta\in \mathbb{R}}\left|\dfrac{d^2(u_p(e^{i\theta})-u(e^{i\theta}))}
{d\theta^2}\right|
\xrightarrow[\
p\to \infty]{} 0,
\end{align*}
where $a^{''}_p(i),a^{''}(i)$ are the Fourier coefficients of
$\dfrac{d^2u_p(e^{i\theta})}{d\theta^2}$ and $\dfrac{d^2u(e^{i\theta})}{d\theta^2}$
respectively, so that $|a^{''}_p(i)-a^{''}(i)|=i^2|a_p(i)-a(i)|$. Thus $f_p$
converges uniformly to $f$ on $T$ and, by the maximum modulus
principle, $f_p$ converges uniformly to $f$ on $\overline {D}$. By the Cauchy
estimates, $f_p^{(k)}(z_l)$ then converges to $f^{(k)}(z_l)$, so there
exists an $n_0\in \mathbb{N}$ such that $$\dfrac{1}{d_{z_l}}-\dfrac{1}{s}<\sqrt[k]
{\dfrac{|f_p^{(k)}(z_l)|}{k!}}$$ for every $p\geq n_0$. Thus
$u_p\in A(l,s,k)$ for $p\geq n_0$. It follows that the set $A(l,s,k)$ is
open in $C^\infty(T)$ and the class $U_2(T,z_0,\infty)$ is a $G_\delta$ subset of
$C^\infty(T)$.
From [8] we can prove that $U_2(T,z_0,\infty)$ is also dense in $C^\infty(T)$.
Indeed, let $v\in C^\infty(T)$, $v=f|_T+g|_T$; then $f$ belongs to $A^\infty(D)$,
and thus there is a sequence of functions $f_n \in A^\infty(D)$, $n=1,2,3,\dots$,
which are holomorphic exactly on $D$, converging to $f$ in the topology of
$A^\infty(D)$. Then
${f_n}|_T+g|_T$ converges to $v$ in the topology of $C^\infty(T)$.
But ${f_n}|_T+g|_T \in U_2(T,z_0,\infty)$, and we conclude that the class
$U_2(T,z_0,\infty)$ is a dense subset of $C^\infty(T)$. The proof is complete.
\end{proof}
\begin{theorem}
Let $z_0 \in T$ and $k=0,1,2,\dots$. The class $U_2(T,z_0,k)$ is a dense and $G_\delta$
subset of $C^k(T)$.
\end{theorem}
\begin{proof}
For $k\geq 2$ the same procedure as in Theorem 3.11 shows that the
class $U_2(T,z_0,k)$ is a $G_\delta$ subset of $C^k(T)$. The space $C^\infty(T)$ is
dense in every $C^k(T)$. To see this, let $u\in C^k(T)$. Then the function
$\dfrac{d^ku}{{d\theta}^k}$ is continuous, and thus there exists a sequence
$u_n \in C^\infty(T)$ converging uniformly to $\dfrac{d^ku}{{d\theta}^k}$. We can
assume that $a_0(u_n)=0$, since
$a_0(u_n) \to a_0\left(\dfrac{d^ku}{{d\theta}^k}\right)=0$.
Then the sequence of functions
$$U_n(e^{it})= \int_0^t{u_n(e^{i\theta})d\theta}+c$$ converges to
$$\int_0^t{\dfrac{d^ku(e^{i\theta})}{{d\theta}^k}}d\theta+c=
\dfrac{d^{k-1}u(e^{it})}{{d\theta}^{k-1}}$$
for a suitable constant $c\in\mathbb{C}$. Repeating this procedure, we obtain the
desired approximation.
The class $U_2(T,z_0,\infty)$ is dense in
$C^\infty(T)$, and the class $U_2(T,z_0,k)$ contains the class
$U_2(T,z_0,\infty)$. Therefore,
the class $U_2(T,z_0,k)$ is a dense subset of $C^k(T)$.
Let $C^k_0(T)$, $k=0,1,2,\dots$, be the set of functions $u\in C^k(T)$ such that
$a_0(u)=0$. The above procedure shows that the class $U_2(T,z_0,2)\cap C^2_0(T)$ is
a $G_\delta$ subset of $C^2_0(T)$. To see that it is also a dense subset of
$C^2_0(T)$, note that $C_0^\infty(T)$ is dense in $C_0^2(T)$, so let
$v\in C_0^\infty(T)$, $v=f|_T+g|_T$.
Then $f$ belongs to $A^\infty(D)$, and thus
there is a sequence of functions $f_n \in A^\infty(D)$, $n=1,2,3,\dots$, which are
holomorphic exactly on $D$, converging to $f$ in the topology of $A^\infty(D)$. The
sequence
${f_n}|_T+g|_T$ converges to $v$ in the topology of $C^\infty(T)$.
But the sequence ${f_n}|_T+g|_T -a_0(f_n)$ also converges to $v$ and belongs to
$U_2(T,z_0,2)\cap C_0^2(T)$; we conclude that the class
$U_2(T,z_0,2)\cap C_0^2(T)$ is a dense subset of $C_0^2(T)$. The function
$\varphi:C^2_0(T)\rightarrow C^1_0(T)$,
$\varphi(u)=\dfrac{du}{d\theta}$, is continuous, linear, $1-1$ and onto
$C^1_0(T)$. From the open mapping theorem its inverse is also continuous.
It is easy to see that
\begin{equation}
\varphi(U_2(T,z_0,2)\cap C^2_0(T))=U_2(T,z_0,1)\cap C^1_0(T),
\end{equation}
which, combined with
the nice properties of $\varphi$ and the fact that the class
$U_2(T,z_0,2)\cap C_0^2(T)$
is a dense $G_\delta$ subset of $C_0^2(T)$, implies that the class
$U_2(T,z_0,1)\cap C^1_0(T)$ is a dense $G_\delta$ subset of $ C^1_0(T)$.
The function $F:C^1(T)\rightarrow C^1_0(T)$, $F(u)=u-a_0(u)$, is continuous, linear
and onto $C^1_0(T)$, and thus an open map. Also,
$$U_2(T,z_0,1)=F^{-1}(U_2(T,z_0,1)\cap C^1_0(T)).$$ Since $F$ is continuous, the
class $U_2(T,z_0,1)$ is a $G_\delta$ subset of $C^1(T)$, and since $F$ is an open
map, it is also dense in $C^1(T)$. By applying the last method once again, it can be
proved that the class $U_2(T,z_0,0)$ is a dense and $G_\delta$ subset of $C(T)$.
\end{proof}
\begin{remark}
In order to prove (1) we use the fact that if $f:D \rightarrow \mathbb{C}$ is
holomorphic, $I$ is an open arc of $T$, $f$ extends continuously on $D\cup I$ and
$f|_I \in C^1(I)$, then the derivative $\dfrac{df}{d\theta}$ extends continuously
on $D\cup I$ as well. This fact is true, and its proof is complex analytic. Thus,
the Fourier method works for $C^\infty(T)$ and $C^p(T)$, $p\geq 2$, without
complex analytic help, while for $p=0,1$ we need something from complex analysis.
\end{remark}
\begin{theorem}
Let $z_0 \in T$, $k=0,1,2,...$ or $\infty$. The class $U_3(T,z_0,k)$ is a dense and
$G_\delta$ subset of $C^k(T)$.
\end{theorem}
\begin{proof}
Let $\varphi :C^\infty(T)\rightarrow C^\infty(T)$, $\varphi(u)=U$,
where $U: T\rightarrow \mathbb{C}$, $U(z)=u(1/z)$ for $z\in T$.
Then obviously $\varphi$ is $1-1$, onto $C^\infty(T)$, continuous, and
$\varphi={\varphi}^{-1}$. Also, $u\in U_2(T,\overline{z_0},\infty)$ if and only if
$\varphi(u)\in U_3(T,z_0,\infty)$, and thus $\varphi(U_2(T,\infty))=U_3(T,\infty)$.
Since the class
$U_2(T,\overline{z_0},\infty)$ is a dense and $G_\delta$ subset of $C^\infty(T)$,
it follows from the above
that $U_3(T,z_0,\infty)$ is a dense and $G_\delta$ subset of $C^\infty(T)$. In the
same way it can be proved that the class $U_3(T,z_0,k)$ is a dense and $G_\delta$
subset of $C^k(T)$.
\end{proof}
Since $U_4(T,z_0,k)=U_3(T,z_0,k)\cap U_2(T,z_0,k)$, and the classes $U_2(T,z_0,k)$
and $U_3(T,z_0,k)$ are dense and $G_\delta$ subsets of $C^k(T)$, the next theorem
follows from Baire's theorem:
\begin{theorem}
Let $z_0 \in T$, $k=0,1,2,...$ or $\infty$. The class $U_4(T,z_0,k)$ is a dense and
$G_\delta$ subset of $C^k(T)$.
\end{theorem}
We will now define another class, which, as we will prove, shares the same
properties with the classes above.
\begin{defi}
Let $z_0 \in T$, $k=0,1,2,...$ or $\infty$. A function $u\in C^k(T)$ belongs to
the class $U_5(T,z_0,k)$ if
there is no pair of a disk $D(z_0,r)$, $r>0$, and a holomorphic function
$f:D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap T}=u|_{D(z_0,r)\cap T}$.
\end{defi}
We will use the notation $U_5(T,k)$ for the class
$\bigcap\limits_{z_0\in T} U_5(T,z_0,k)$.
We can easily see now that the class $U_5(T,z_0,k)$
contains $U_2(T,z_0,k)$, which is a dense and $G_\delta$ set.
Although this is a satisfying result, we will prove something even stronger.
\begin{theorem}
Let $z_0 \in T$ and $k=0,1,2,\dots$ or $\infty$. Then the class $U_5(T,z_0,k)$
is a dense and $G_\delta$ subset of $C^k(T)$.
\end{theorem}
\begin{proof}
We will first begin with $C^\infty(T)$. A function $u\in C^\infty(T)$,
$u=f_{|T}+g_{|T}$ belongs to the class $U_5(T,z_0,\infty)$ if and only if
$f$ belongs to $U_1(D,z_0)$ and $g$ to $U_1(\mathbb{C}\setminus \overline{D},z_0)$.
From the proof of Proposition 3.2 we can see that there exists an open disk
$D(z_0,r)$ such that, for every $z\in D(z_0,r)\cap D$ $f$ does not belong to the
class $F(D,z,z_0)$ and for every
$w\in D(z_0,r)\cap (\mathbb{C}\setminus \overline D)$ $g$ does not belong to the
class $F(\mathbb{C}\setminus \overline{D},w,z_0)$.
Let $z_l\in D,l=1,2,3,...$,$w_l\in \mathbb{C}\setminus
\overline D,l=1,2,3,... $ be sequences converging to $z_0$.
Since both $z_l,w_l$ converge to $z_0$, there is an $m=1,2,3,...$ such that $f$ does
not belong to $F(D,z_m,z_0)$ and $g$ does not belong to $F(\mathbb{C}\setminus
\overline{D},w_m,z_0)$.
Thus, $${U_5(T,z_0,\infty)}^\mathsf{c}=\bigcup\limits_{l=1}^\infty \bigcup
\limits_{s=1}^\infty \bigcup\limits_{n=1}^\infty
\bigcap\limits_{k=n}^\infty E(l,s,k), $$
where $$E(l,s,k)=\left\{u\in C^\infty(T): \dfrac{1}
{d_{z_l}}-\dfrac{1}{s}\geq \sqrt[k]{\dfrac{|f^{(k)}(z_l)|}{k!}}, \dfrac{1}{d_{w_l}}-
\dfrac{1}{s}\geq\sqrt[k]{\dfrac{|g^{(k)}(w_l)|}{k!}}\right\}.$$
The method used in Theorem 3.11 proves that the class
${U_5(T,z_0,\infty)}^\mathsf{c}$ is an $F_\sigma$ subset of $C^\infty(T)$ with empty
interior. Equivalently, the class $U_5(T,z_0,\infty)$ is a dense and $G_\delta$ subset of
$C^\infty(T)$. Continuing as in Theorem 3.12, we can prove that the class
$U_5(T,z_0,k)$ has the desired property. The proof is complete.
\end{proof}
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(T,k),$ $U_3(T,k),$ $U_4(T,k)$ and
$U_5(T,k)$ are dense and $G_\delta$ subsets of $C^k(T)$.
\end{theorem}
\begin{proof}
Let $A$ be a countable dense subset of $T$. Then it holds that
$U_2(T,k)=\bigcap\limits_{z_0\in A} U_2(T,z_0,k)$,
$U_3(T,k)=\bigcap\limits_{z_0\in A} U_3(T,z_0,k)$,
$U_4(T,k)=\bigcap\limits_{z_0\in A} U_4(T,z_0,k)$ and
$U_5(T,k)=\bigcap\limits_{z_0\in A} U_5(T,z_0,k)$
and Baire's Theorem completes the proof.
\end{proof}
\section{Non extendability from any side of an analytic curve}
\subsubsection*{The closed arc}
Let $I=\left\{e^{it}:a\leq t \leq b
\right\}$, $0\leq a<b\leq 2\pi$, be a compact arc of $T$.
A function $u:I\rightarrow \mathbb{C}$ belongs to the space $C^k(I)$, $k=0,1,2,...$
or $\infty$,
if it is continuous and $k$ times continuously differentiable. The topology
of $C^k(I)$ is defined by the seminorms
$$\sup\limits_{\theta \in [a,b]}\left|\dfrac{d^lf(e^{i\theta})}
{d{\theta}^l}\right|,\ l=0,1,2,...,k.$$
Using Borel's theorem
[6] we can prove that every $f\in C^\infty (I)$ extends to a function in $C^\infty (T)$.
Indeed, if $$p_n=\dfrac{1}{n!}\dfrac{d^{n}f(e^{ia})}
{d{\theta}^{n}},\qquad q_n=\dfrac{1}{n!}\dfrac{d^{n}f(e^{ib})}
{d{\theta}^{n}},\qquad n=0,1,2,...,$$ we can find a function $g:[b,a+2\pi]\rightarrow \mathbb{C}$ which
is infinitely differentiable on $[b,a+2\pi]$
and such that $g^{(n)}(b)= q_n n!$, $g^{(n)}(a+2\pi)=p_n n!$, $n=0,1,2,...$.
Then the function
$$ F(e^{it}) = \left\{
\begin{array}{ c l }
f(e^{it}), & t\in [a,b] \\
g(t), & t\in [b,a+2\pi] \\
\end{array}
\right. $$
is well defined and belongs to $C^\infty(T)$.
\begin{defi}
Let $z_0 \in I$, $k=0,1,2,...$ or $\infty$. A function $u \in C^k(I)$ belongs to the
class $U_2(I,z_0,k)$ if there is no pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:|z|> 1\right\},z_0\in I,r>0$
and a continuous function $\lambda:I\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap I}=u|_{D(z_0,r)\cap I}.$
\end{defi}
\begin{defi}
Let $z_0 \in I$, $k=0,1,2,...$ or $\infty$. A function $u \in C^k(I)$ belongs to the
class $U_3(I,z_0,k)$ if there is no pair of a domain $P(z_0,r)=D(z_0,r)\cap D,r>0$
and a continuous function $h:I\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap I}=u|_{D(z_0,r)\cap I}.$
\end{defi}
\begin{defi}
Let $z_0 \in I$, $k=0,1,2,...$ or $\infty$. A function $u \in C^k(I)$ belongs to the
class $U_4(I,z_0,k)$ if there are neither a pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:|z|> 1\right\},r>0$ and a continuous
function $\lambda:I\cup \Omega(z_0,r)\rightarrow \mathbb{C}$ which is also
holomorphic at $\Omega(z_0,r)$ and satisfies
$\lambda|_{D(z_0,r)\cap I}=u|_{D(z_0,r)\cap I}$
nor a pair of a domain $P(z_0,r)=D(z_0,r)\cap D,r>0$
and a continuous function $h:I\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap I}=u|_{D(z_0,r)\cap I}.$
\end{defi}
\begin{defi}
Let $z_0 \in I$, $k=0,1,2,...$ or $\infty$. A function $u\in C^k(I)$ belongs to
the class $U_5(I,z_0,k)$ if
there is no pair of a disk $D(z_0,r)$, $r>0$, and a holomorphic function
$f:D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap I}=u|_{D(z_0,r)\cap I}$.
\end{defi}
\begin{remark}
In Definitions 4.1, 4.2 and 4.3, if $u\in C^\infty(I)$, it follows automatically
that every derivative of $\lambda$ and $h$
extends continuously to the sets $(D(z_0,r)\cap I)\cup\Omega(z_0,r)$ and
$(D(z_0,r)\cap I)\cup P(z_0,r)$ respectively. This follows easily using Remark 3.10.
\end{remark}
We will use the notation $U_2(I,k),U_3(I,k),U_4(I,k)$ and $U_5(I,k)$
for the classes $\bigcap\limits_{z_0\in I} U_2(I,z_0,k)$,
$\bigcap\limits_{z_0\in I} U_3(I,z_0,k)$,$\bigcap\limits_{z_0\in I}
U_4(I,z_0,k)$ and $\bigcap\limits_{z_0\in I} U_5(I,z_0,k)$.
\begin{theorem}
Let $z_0 \in I$, $k=0,1,2,...$ or $\infty$. The classes
$U_2(I,z_0,k)$, $U_3(I,z_0,k)$, $U_4(I,z_0,k)$ and $U_5(I,z_0,k)$ are dense and
$G_\delta$ subsets of $C^k(I)$.
\end{theorem}
\begin{proof}
We will prove the theorem only for the class $U_2(I,z_0,\infty)$, but the
same method can be used for the other cases too.
Let $\varphi: C^\infty(T)\rightarrow C^\infty(I)$ be defined by
$\varphi(u)=u|_I$, $u\in C^\infty(T)$. Then
$\varphi$ is continuous, linear and onto $C^\infty(I)$, and
$C^\infty(T)$, $C^\infty(I)$ are Frechet spaces.
Thus, by Michael's theorem [11], there exists a continuous function
$\psi:C^\infty(I)\rightarrow C^\infty(T)$ such that
$\varphi(\psi(u))=u$ for every $u\in C^\infty(I)$.
It is easy to see that
$$U_2(I,z_0,\infty)=\psi^{-1}(U_2(T,z_0,\infty))$$ and
$$U_2(I,z_0,\infty)=\varphi(U_2(T,z_0,\infty)).$$
Since $U_2(T,z_0,\infty)$ is dense in $C^\infty(T)$
and $\varphi$ is continuous and onto $C^\infty(I)$, $U_2(I,z_0,\infty)$ is
dense in $C^\infty(I)$. Also,
$$U_2(T,z_0,\infty)=\bigcap \limits_{n=1}^\infty G_n,$$ where $G_n$, $n=1,2,3,...$,
are open subsets of $C^\infty(T)$, and
$$U_2(I,z_0,\infty)=\bigcap\limits_{n=1}^\infty \psi^{-1}(G_n).$$
Since the $G_n$ are open subsets of $C^\infty(T)$, from the continuity of $\psi$ we
deduce that the sets $\psi^{-1}(G_n)$ are open in $C^\infty(I)$. We
conclude that $U_2(I,z_0,\infty)$ is a dense and $G_\delta$ subset of $C^\infty(I)$.
\end{proof}
Let $A$ be a countable dense subset of $I$. Combining the fact that the classes
$U_2(I,z_0,k)$, $U_3(I,z_0,k)$, $U_4(I,z_0,k)$ and $U_5(I,z_0,k)$ are dense and $G_\delta$
subsets of $C^k(I)$ with the identities
$U_2(I,k)=\bigcap\limits_{z_0\in A} U_2(I,z_0,k)$, $U_3(I,k)=\bigcap\limits_{z_0\in
A} U_3(I,z_0,k)$, $U_4(I,k)=\bigcap\limits_{z_0\in A} U_4(I,z_0,k)$ and
$U_5(I,k)=\bigcap\limits_{z_0\in A} U_5(I,z_0,k)$
and Baire's theorem, we get the next theorem:
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(I,k),U_3(I,k),U_4(I,k)$ and
$U_5(I,k)$ are dense and $G_\delta$ subsets of $C^k(I)$.
\end{theorem}
\subsubsection*{The closed interval}
Next we will extend our results even more.
We consider the compact interval $L=[\gamma,\delta]$, $\gamma<\delta$.
Let $k=0,1,2,...$ or $\infty$. The set of continuous functions $f:L\rightarrow
\mathbb{C}$ which are $k$ times continuously differentiable is denoted by
$C^k(L)$. The topology of the space is defined by the seminorms
$$\sup\limits_{z\in L}{|f^{(l)}(z)|},\ l=0,1,2,...,k.$$ In this way
$C^k(L)$, $k=0,1,2,...$, becomes a Banach space and $C^\infty(L)$ a Frechet space, and
Baire's theorem is at our disposal.
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(L)$ belongs to the class
$U_2(L,k)$ if there is no pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:Im(z)>0 \right\},z_0\in L,r>0$
and a continuous function $\lambda:L\cup \Omega(z_0,r)\rightarrow \mathbb{C}$
which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap L}=u|_{D(z_0,r)\cap L}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(L)$ belongs to the class
$U_3(L,k)$ if there is no pair of a domain $P(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:Im(z)<0 \right\},z_0\in L,r>0$
and a continuous function $h:L\cup P(z_0,r)\rightarrow \mathbb{C}$
which is also holomorphic at $P(z_0,r)$ and satisfies
$h|_{D(z_0,r)\cap L}=u|_{D(z_0,r)\cap L}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(L)$ belongs to the class
$U_4(L,k)$ if there are neither a pair of a domain
$\Omega(z_0,r)=D(z_0,r)\cap\left\{z\in\mathbb{C}:Im(z)>0 \right\}$, $z_0\in L$, $r>0$,
and a continuous function $\lambda:L\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap L}
=u|_{D(z_0,r)\cap L}$ nor a pair of a domain
$P(z_0,r)=D(z_0,r)\cap \left\{z\in\mathbb{C}:Im(z)<0 \right\},r>0$
and a continuous function $h:L\cup P(z_0,r)\rightarrow \mathbb{C}$
which is also holomorphic at $P(z_0,r)$ and satisfies
$h|_{D(z_0,r)\cap L}=u|_{D(z_0,r)\cap L}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u\in C^k(L)$ belongs to
the class $U_5(L,k)$ if
there is no pair of a disk $D(z_0,r)$, $z_0\in L$, $r>0$, and a holomorphic function
$f: D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap L}=u|_{D(z_0,r)\cap L}$.
\end{defi}
\begin{remark}
In Definitions 4.8, 4.9 and 4.10, if $u\in C^\infty(L)$, it follows automatically
that every derivative of $\lambda$ and $h$
extends continuously to the sets $(D(z_0,r)\cap L)\cup\Omega(z_0,r)$ and
$(D(z_0,r)\cap L)\cup P(z_0,r)$ respectively. This follows easily using Remark 3.10.
\end{remark}
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(L,k)$,$U_3(L,k)$,
$U_4(L,k)$ and $U_5(L,k)$ are dense and $G_\delta$ subsets of $C^k(L)$.
\end{theorem}
\begin{proof}
We consider the function $$\mu: I_0 \rightarrow L,
\mu(z)=(\delta-\gamma)\dfrac{i+1}{i-1}\dfrac{z-1}{z+1}+\gamma,$$
where $I_0=\left\{e^{it}: 0\leq t \leq \dfrac{\pi}{2} \right\}$, which is
continuous, 1-1 and onto $L$.
Thus the function $$\Pi:C^k(L) \rightarrow
C^k(I_0),\quad \Pi(f)=f\circ\mu$$
is continuous, 1-1, onto $C^k(I_0)$ and has a continuous inverse. Also,
$$U_2(L,k)={\Pi}^{-1}(U_2(I_0,k)),$$ which
combined with the nice properties of $\Pi$ and the fact that
$U_2(I_0,k)$ is a dense and $G_\delta$ subset of $C^k(I_0)$
shows that $U_2(L,k)$ is a dense and $G_\delta$ subset of $C^k(L)$.
Using the same function we can see that the same holds true for
the classes $U_3(L,k)$, $U_4(L,k)$ and $U_5(L,k)$.
\end{proof}
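To make the change of variables in the preceding proof concrete, we record the boundary behaviour of $\mu$ (a direct computation, added here as a sanity check):
$$\mu(1)=(\delta-\gamma)\,\dfrac{i+1}{i-1}\cdot\dfrac{1-1}{1+1}+\gamma=\gamma,
\qquad
\mu(i)=(\delta-\gamma)\,\dfrac{i+1}{i-1}\cdot\dfrac{i-1}{i+1}+\gamma=\delta,$$
so $\mu$ carries the endpoints $e^{i0}=1$ and $e^{i\pi/2}=i$ of $I_0$ to the endpoints $\gamma$ and $\delta$ of $L$; being the restriction of a M\"obius transformation, $\mu$ is, as claimed, continuous and 1-1.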
\begin{remark}
The fact that the class $U_5(L,\infty)$ is a dense and $G_\delta$ subset of
$C^\infty(L)$ strengthens the result of [2].
\end{remark}
\subsubsection*{The line}
Let $k=0,1,2,...$ or $\infty$. At this point we consider the continuous
functions $f:\mathbb{R}\rightarrow \mathbb{C}$ which are $k$ times continuously
differentiable. This space of functions is
denoted by $C^k(\mathbb{R})$
and becomes a Frechet space with the
topology induced by the seminorms
$$\sup\limits_{z\in [-n,n]}{|f^{(l)}(z)|},\ l=0,1,2,...,k,\ n=1,2,3,...$$
We can see that every function $f\in C^\infty([\gamma,\delta])$
extends to a function in $C^\infty(\mathbb{R})$. Indeed, using
Borel's theorem we can find
functions $$f_1\in C^\infty([\gamma-1,\gamma]),
f_2\in C^\infty([\delta,\delta+1])$$ such that
$${f_1}^{(n)}(\gamma-1)= 0,{f_1}^{(n)}(\gamma)=
f^{(n)}(\gamma),{f_2}^{(n)}(\delta)=
f^{(n)}(\delta),{f_2}^{(n)}(\delta +1)= 0,
n=0,1,2,...$$
We can now extend $f$ as $$ \tilde{f}(z) = \left\{
\begin{array}{ c l }
f(z), & z\in [\gamma,\delta] \\
f_1(z), & z\in [\gamma-1,\gamma] \\
f_2(z), & z\in [\delta,\delta+1] \\
0, & z\in (-\infty,\gamma-1]\cup[\delta+1,+\infty)
\end{array}\right. $$
Then the extension is well defined and belongs to
$C^\infty(\mathbb{R})$. Also, in this case both the extension
and every derivative of the extension are bounded and converge to $0$ as $z$
converges to $\pm \infty$.
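As a concrete illustration of the kind of smooth gluing used here (a standard example, which does not replace the Borel construction matching all prescribed derivatives), recall the classical $C^\infty$ step function:
$$e(x)=\begin{cases} e^{-1/x}, & x>0,\\ 0, & x\leq 0,\end{cases}
\qquad
\theta(x)=\dfrac{e(x)}{e(x)+e(1-x)}.$$
The function $\theta$ belongs to $C^\infty(\mathbb{R})$, vanishes on $(-\infty,0]$, equals $1$ on $[1,+\infty)$, and all of its derivatives vanish at $0$ and at $1$. In the special case where all the derivatives $f^{(n)}(\gamma)$ and $f^{(n)}(\delta)$ vanish, one may simply take $f_1\equiv 0$ and $f_2\equiv 0$ above.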
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\mathbb{R})$ belongs to the
class $U_2(\mathbb{R},k)$ if there is no pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:Im(z)>0\right\},z_0\in \mathbb{R},r>0$
and a continuous function
$\lambda:\mathbb{R}\cup \Omega(z_0,r)\rightarrow \mathbb{C}$ which is also
holomorphic at $\Omega(z_0,r)$ and satisfies
$\lambda|_{D(z_0,r)\cap \mathbb{R}}=u|_{D(z_0,r)\cap \mathbb{R}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\mathbb{R})$ belongs to the
class $U_3(\mathbb{R},k)$ if there is no pair of a domain $P(z_0,r)=D(z_0,r)
\cap \left\{z\in\mathbb{C}:Im(z)<0\right\},z_0\in \mathbb{R},r>0$
and a continuous
function $h:\mathbb{R}\cup P(z_0,r)\rightarrow \mathbb{C}$
which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap \mathbb{R}}
=u|_{D(z_0,r)\cap \mathbb{R}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\mathbb{R})$ belongs to the
class $U_4(\mathbb{R},k)$ if there are neither a pair of a domain
$\Omega(z_0,r)=D(z_0,r)
\cap\left\{z\in\mathbb{C}:Im(z)>0\right\}$, $z_0\in \mathbb{R}$,$r>0$
and a continuous function $\lambda:\mathbb{R}\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap \mathbb{R}}
=u|_{D(z_0,r)\cap \mathbb{R}}$ nor a pair of a domain
$P(z_0,r)=D(z_0,r)\cap \left\{z\in\mathbb{C}:Im(z)<0\right\},r>0$
and a continuous
function $h:\mathbb{R}\cup P(z_0,r)\rightarrow \mathbb{C}$
which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap \mathbb{R}}
=u|_{D(z_0,r)\cap \mathbb{R}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u\in C^k(\mathbb{R})$ belongs to
the class $U_5(\mathbb{R},k)$ if
there is no pair of a disk $D(z_0,r)$, $z_0\in \mathbb{R}$, $r>0$, and a
holomorphic function
$f:D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap \mathbb{R}}=u|_{D(z_0,r)\cap \mathbb{R}}$.
\end{defi}
\begin{remark}
In Definitions 4.15, 4.16 and 4.17, if $u\in C^\infty(\mathbb{R})$, it follows
automatically that every derivative of $\lambda$ and $h$
extends continuously to the sets $(D(z_0,r)\cap \mathbb{R})\cup\Omega(z_0,r)$ and
$(D(z_0,r)\cap \mathbb{R})\cup P(z_0,r)$ respectively. This follows easily from
Remark 3.10.
\end{remark}
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(\mathbb{R},k)$, $U_3(\mathbb{R},k)$,
$U_4(\mathbb{R},k)$ and $U_5(\mathbb{R},k)$ are dense and $G_\delta$ subsets of
$C^k(\mathbb{R})$.
\end{theorem}
\begin{proof}
We will prove the theorem only for the class $U_2(\mathbb{R},\infty)$; the other
cases are similar. We consider the continuous linear maps
$$\varphi_{N}: C^\infty(\mathbb{R})\rightarrow
C^\infty([-N,N]),\ N=1,2,3,...,$$ defined by
$$\varphi_{N}(f)=f|_{[-N,N]},$$ which are also
onto $C^\infty([-N,N])$. Since $C^\infty(\mathbb{R})$ and
$C^\infty([-N,N])$ are Frechet spaces, the maps $\varphi_{N}$ are also
open.
Then we notice that
$$U_2(\mathbb{R},\infty)=\bigcap\limits_{N=1}^\infty
{\varphi_{N}}^{-1}(U_2([-N,N],\infty)).$$
Since the sets $U_2([-N,N],\infty)$ are dense subsets of $C^\infty([-N,N])$ and
the maps $\varphi_{N}$ are open, the sets ${\varphi_{N}}^{-1}(U_2([-N,N],\infty))$
are dense subsets of $C^\infty(\mathbb{R})$.
Also, we have proved above that the classes
$U_2([-N,N],\infty)$ are $G_\delta$ subsets of $C^\infty([-N,N])$.
From the continuity of the maps $\varphi_N$, the classes
${\varphi_{N}}^{-1}(U_2([-N,N],\infty))$ are $G_\delta$ subsets of
$C^\infty(\mathbb{R})$.
Consequently, from Baire's theorem the class $U_2(\mathbb{R},\infty)$ is a dense and
$G_\delta$ subset of $C^\infty(\mathbb{R})$. The proof is complete.
\end{proof}
The essential part of the proof of Theorem 4.20 is that the functions
$\varphi_N$ are continuous, linear, onto maps between
Frechet spaces. Thus, we can define the space $C_b^k(\mathbb{R})$
as the space of functions $f\in C^k(\mathbb{R})$ such that
$f^{(l)}$, $l=0,1,2,...,k$, are bounded; $C_b^\infty(\mathbb{R})$
is a Frechet space.
As we have proved, every function in $C^k([-n,n])$, $n=1,2,3,...$,
can be extended to a function in $C_b^k(\mathbb{R})$. It is not hard
to see that the functions $\varphi_N$, as defined in the proof of
Theorem 4.20, are continuous, linear, onto maps
between Frechet spaces
and thus open maps. Therefore, if we retain the same notation, the corresponding
subsets of $C_b^k(\mathbb{R})$ of non-extendable functions, namely
$U_2(\mathbb{R},k)$, $U_3(\mathbb{R},k)$, $U_4(\mathbb{R},k)$ and
$U_5(\mathbb{R},k)$, are
dense and $G_\delta$ subsets of
$C_b^k(\mathbb{R})$.
Also, we consider the subspace $C_0^k(\mathbb{R})$ of $C_b^k(\mathbb{R})$
which contains the functions
$f\in C^k(\mathbb{R})$ with
$\lim\limits_{z\rightarrow \pm \infty} f^{(l)}(z)=0$, $l=0,1,2,...,k$;
$C_0^\infty(\mathbb{R})$
is a Frechet space too. If we retain the same notation,
the above procedure shows that the corresponding
subsets of $C_0^k(\mathbb{R})$ of non-extendable functions, namely
$$U_2(\mathbb{R},k),\ U_3(\mathbb{R},k),\ U_4(\mathbb{R},k)\ \text{and}\
U_5(\mathbb{R},k),$$
are dense and $G_\delta$ subsets of $C_0^k(\mathbb{R})$.
\subsubsection*{The open arc and the open interval}
We return to the cases of the arc $I=\left\{e^{it}:a<t<b \right\}$,
$0\leq a<b \leq 2\pi$, and the interval
$L=(\gamma,\delta)$, $\gamma<\delta$, with the only difference that now
they are open. The space of continuous functions $f:I\rightarrow \mathbb{C}$
which are $k$ times continuously differentiable is denoted by
$C^k(I)$ and the space of continuous functions $g:L\rightarrow
\mathbb{C}$ which are $k$ times continuously differentiable is denoted by
$C^k(L)$. Both spaces become Frechet
spaces with the topology of uniform convergence on compact sets
of the first $k$ derivatives. Since there is a homeomorphism between $I$ or $L$ and
$\mathbb{R}$ which maps a neighborhood of $I$ or $L$ onto a neighborhood of
$\mathbb{R}$,
the corresponding subsets of $C^k(I)$ of non-extendable functions, namely
$$U_2(I,k),\ U_3(I,k),\ U_4(I,k)\ \text{and}\ U_5(I,k),$$ and
the corresponding subsets of $C^k(L)$ of non-extendable functions,
$$U_2(L,k),\ U_3(L,k),\ U_4(L,k)\ \text{and}\ U_{5}(L,k),$$ are dense and $G_\delta$
subsets of $C^k(I)$ and $C^k(L)$ respectively.
Similarly to the above we define the spaces $$C_b^\infty(I),\
C_b^\infty(L),\ C_0^\infty(I)\
\text{and}\ C_0^\infty(L),$$ which are Frechet spaces.
Once again, if we retain the same notation, the corresponding
subsets of non-extendable functions, namely
$$U_2(I,k),\ U_3(I,k),\ U_4(I,k)\ \text{and}\ U_5(I,k),$$ are
dense and $G_\delta$ subsets of both
$C_b^k(I)$ and $C_0^k(I)$, and the corresponding classes
$$U_2(L,k),\ U_3(L,k),\ U_4(L,k)\ \text{and}\ U_5(L,k)$$ are dense and $G_\delta$
subsets of both $C_b^k(L)$ and $C_0^k(L)$.
\subsubsection*{Jordan analytic curves}
According to [1], a Jordan curve $ \gamma $ is called analytic if $ \gamma(t)=\Phi(e^{it}) $ for $ t \in \mathbb{R} $ where $ \Phi : D(0,r,R)\rightarrow \mathbb{C} $ is injective and holomorphic on $ D(0,r,R)=\{ z \in \mathbb{C}: r< |z|<R \} $ with $ 0<r<1<R $.
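A simple explicit example (added here for illustration) is the ellipse: for $A>B>0$, take
$$\Phi(z)=\frac{A+B}{2}\,z+\frac{A-B}{2}\,\frac{1}{z},
\qquad \Phi(e^{it})=A\cos t+iB\sin t .$$
If $\Phi(z)=\Phi(w)$ with $z\neq w$, then $zw=\frac{A-B}{A+B}$, so $\Phi$ is injective and holomorphic on any annulus $D(0,r,R)$ with $\sqrt{(A-B)/(A+B)}<r<1<R$; hence the ellipse is a Jordan analytic curve in the above sense.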
\\
Let $ \Omega $ be the interior of a Jordan analytic curve $ \delta $. Let $ \varphi:D\rightarrow \Omega $ be a Riemann map. Then $ \varphi $ has a conformal extension to an open neighbourhood $ W $ of $ \overline{D} $, and $ \sup\limits_{|z|\leq1} |\varphi^{(l)}(z)|< +\infty$, $ \sup\limits_{z\in\overline{\Omega}} |(\varphi^{-1})^{(l)}(z)|< +\infty$ for every $ l=0,1,2, \dots $. Let $ \gamma:[0,2\pi]\rightarrow \mathbb{C} $ be the curve with $ \gamma(t)=\varphi(e^{it}) $ for $ t \in [0, 2\pi] $; we extend $ \gamma $ to a $ 2\pi $-periodic function. In what follows, the symbols $ \Omega, \gamma, \varphi, W $ will have the meaning given above, unless otherwise stated.
\begin{defi}
Let $ u : \partial \Omega \rightarrow \mathbb{C}$ be a continuous function and let $ \omega : T\rightarrow \mathbb {C}$ be the function $ \omega=u\circ \varphi $. Then, for $ k=0,1,2, \dots, +\infty $, we have that $ u \in C^{k}(\partial \Omega) $ if and only if $ \omega \in C^{k}(T)$, and the function $ h:C^{k}(\partial \Omega)\rightarrow C^{k}(T) $ with $ h(u)=u\circ\varphi $ is an isomorphism. We note that the topology of $ C^{k}(\partial \Omega) $ is defined by the seminorms
$$ \sup\limits_{\vartheta\in \mathbb{R}} \left| \frac{d^{l}(u\circ\varphi)}{d\vartheta^{l}} (e^{i\vartheta}) \right|,\ l=0,1,2,3 \dots, k. $$
\end{defi}
Due to the isomorphism $ h $ we can easily generalize some results of the previous paragraphs.
\begin{defi}
For $ k=0,1,2, \dots, +\infty $, a function $u \in C^{k}(\gamma^{*})$ belongs to the class
$U_5(\gamma^{*},k)$ if there is no pair of a disk $D(z_0,r)$, $z_0\in \gamma^{*}$, $r>0$,
and a continuous
function $\lambda:\gamma^{*}\cup D(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$D(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap \gamma^{*}}
=u|_{D(z_0,r)\cap \gamma^{*}}.$ We will simply write $U_5(\gamma^{*})$ for $U_5(\gamma^{*},\infty)$.
\end{defi}
\begin{theorem}
For $ k=0,1,2, \dots, +\infty $, the class $ U_5(\gamma^{*},k) $ is a dense and $ G_\delta $ subset of $C^{k}(\partial \Omega)$.
\end{theorem}
\begin{proof}
We will prove the case $ k=+\infty $; the other cases can be proven in the same way. We know that $ h:C^{\infty}(\partial \Omega)\rightarrow C^{\infty}(T) $ with $ h(u)=u\circ\varphi $ is an isomorphism. We will prove that $ h(U_5(\gamma^{*}))=U_5(T) $, or equivalently $ h(U_5(\gamma^{*})^{\mathsf c})=U_5(T)^{\mathsf c} $. If $ f\in U_5(\gamma^{*})^{\mathsf c} $, then there exist a point $ \gamma(t_0) $, an open set $ V $ with $\gamma(t_0)\in V \subseteq \varphi(W) $ and a holomorphic function $ F:V \rightarrow \mathbb {C} $ such that $ F|_{V \cap \gamma^{*}}=f|_{V \cap \gamma^{*}} $. Therefore, $ F\circ\varphi:\varphi^{-1}(V) \rightarrow \mathbb {C} $ is holomorphic and $ F\circ\varphi|_{\varphi^{-1}(V) \cap T}=f\circ\varphi|_{\varphi^{-1}(V) \cap T}=h(f)|_{\varphi^{-1}(V) \cap T} $. As a result, $ h(f)\in U_5(T)^{\mathsf c} $, and hence $ h(U_5(\gamma^{*})^{\mathsf c})\subseteq U_5(T)^{\mathsf c} $. Conversely, if $ f\in U_5(T)^{\mathsf c} $, then there exist a point $ e^{it_0} $, an open set $ V $ with $e^{it_0}\in V \subseteq W $ and a holomorphic function $ F:V \rightarrow \mathbb {C} $ such that $ F|_{V \cap T}=f|_{V \cap T} $. Thus, $ F\circ\varphi^{-1}:\varphi(V) \rightarrow \mathbb {C} $ is holomorphic and $ F\circ\varphi^{-1}|_{\varphi(V) \cap \gamma^{*}}=f\circ\varphi^{-1}|_{\varphi(V) \cap \gamma^{*}} $, while $ h(f\circ\varphi^{-1}|_{\gamma^{*}})=f $. Therefore, $ f\in h(U_5(\gamma^{*})^{\mathsf c}) $ and $ h(U_5(\gamma^{*})^{\mathsf c})\supseteq U_5(T)^{\mathsf c} $. From Theorem 3.18 we have that $ U_5(T) $ is a dense and $ G_\delta $ subset of $C^{\infty}(T)$, and therefore $ U_5(\gamma^{*}) $ is a dense and $ G_\delta $ subset of $C^{\infty}(\partial \Omega)$.
\end{proof}
At this point we give some definitions which are generalizations of the definitions of the previous paragraph.
\begin{defi}
For $ k=0,1,2, \dots, +\infty $, a function $u \in C^{k}(\gamma^{*})$ belongs to the class
$U_2(\gamma^{*},k)$ if there is no pair of a domain $\Omega(z_0,r)=D(z_0,r)
\cap B$, where $ B $ is the unbounded connected component of $ \mathbb {C}\setminus \gamma^{*} $, $z_0\in \gamma^{*}$, $r>0$,
and a continuous
function $\lambda:\gamma^{*}\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap \gamma^{*}}
=u|_{D(z_0,r)\cap \gamma^{*}}.$
\end{defi}
\begin{defi}
For $ k=0,1,2, \dots, +\infty $, a function $u \in C^{k}(\gamma^{*})$ belongs to the class
$U_3(\gamma^{*},k)$ if there is no pair of a domain $P(z_0,r)=D(z_0,r)\cap
A$, where $ A $ is the bounded connected component of $ \mathbb {C}\setminus \gamma^{*} $, $z_0\in \gamma^{*},r>0$
and a continuous
function $h:\gamma^{*}\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap \gamma^{*}}
=u|_{D(z_0,r)\cap \gamma^{*}}.$
\end{defi}
\begin{defi}
For $ k=0,1,2, \dots, +\infty $, a function $u \in C^{k}(\gamma^{*})$ belongs to the class
$U_4(\gamma^{*},k)$ if there are neither a pair of a domain $
\Omega(z_0,r)=D(z_0,r)
\cap B$, where $ B $ is the unbounded connected component of $ \mathbb {C}\setminus \gamma^{*} $, $z_0\in \gamma^{*}$, $r>0$,
and a continuous
function $\lambda:\gamma^{*}\cup \Omega(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$\Omega(z_0,r)$ and satisfies $\lambda|_{D(z_0,r)\cap \gamma^{*}}
=u|_{D(z_0,r)\cap \gamma^{*}}$ nor a pair of a domain $P(z_0,r)=D(z_0,r)\cap
A$, where $ A $ is the bounded connected component of $ \mathbb {C}\setminus \gamma^{*} $, $r>0$,
and a continuous
function $h:\gamma^{*}\cup P(z_0,r)\rightarrow
\mathbb{C}$ which is also holomorphic at
$P(z_0,r)$ and satisfies $h|_{D(z_0,r)\cap \gamma^{*}}
=u|_{D(z_0,r)\cap \gamma^{*}}.$
\end{defi}
Now, we have the generalization of Theorem 3.18.
\begin{theorem}
For $ k=0,1,2, \dots, +\infty $, the classes $ U_2(\gamma^{*},k),U_3(\gamma^{*},k),U_4(\gamma^{*},k) $ are dense and $ G_{\delta} $ subsets of $ C^{k}(\gamma^{*}) $.
\end{theorem}
\begin{proof}
Using the isomorphism $ h:C^{k}(\partial \Omega)\rightarrow C^{k}(T) $ with $ h(u)=u\circ\varphi $, the theorem can be proven in the same way as Theorem 4.23.
\end{proof}
\subsubsection*{Analytic curves}
We will now extend the above results to more general curves.
\begin{defi}
We will say that a simple curve $ \gamma:[a,b]\rightarrow \mathbb{C} $ is analytic if there is an open set $ [a,b]\subseteq V\subseteq \mathbb{C} $ and a holomorphic and injective function $ F:V\rightarrow \mathbb{C} $ such that $ F|_{[a,b]}=\gamma|_{[a,b]} $.
\end{defi}
By analytic continuation, it is clear that for an analytic simple curve and for a specific $ V $ as above, there is a unique $ F $ as above. For the following definitions and propositions $ \gamma $ will be an analytic curve and $ V, [a,b] $ as in the definitions above.
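For example (an illustration added here), the parabolic arc $ \gamma(t)=t+it^{2} $, $ t\in[a,b] $, is an analytic simple curve: the entire function $ F(z)=z+iz^{2} $ satisfies $ F|_{[a,b]}=\gamma|_{[a,b]} $, and since
$$F(z)-F(w)=(z-w)\bigl(1+i(z+w)\bigr),$$
the equality $ F(z)=F(w) $ with $ z\neq w $ forces $ z+w=i $; therefore $ F $ is injective on the strip $ V=\{z\in\mathbb{C}:|\mathrm{Im}\,z|<\tfrac{1}{2}\}\supseteq[a,b] $.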
\\
Let $k=0,1,2,...$ or $\infty$. The set of continuous functions $f:\gamma^{*}\rightarrow
\mathbb{C}$ which are $k$ times continuously differentiable is denoted by
$C^k(\gamma^{*})$. The topology of the space is defined by the seminorms
$$\sup\limits_{t\in [a,b]}{|(f\circ\gamma)^{(l)}(t)|},\ l=0,1,2,...,k.$$ In this way
$C^k(\gamma^{*})$, $k=0,1,2,...$, becomes a Banach space and $C^\infty(\gamma^{*})$ a Frechet space, and
Baire's theorem is at our disposal.
\begin{defi}
For a curve $ \gamma $ as above, we define as $ K(\gamma,+) $ the open set $ F(\left\{z\in\mathbb{C}:Im(z)>0 \right\} \cap V) $ and as $ K(\gamma,-) $ the open set $ F(\left\{z\in\mathbb{C}:Im(z)<0 \right\} \cap V) $.
\end{defi}
We can imagine the above two sets as two different sides of a neighbourhood of $\gamma$.
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_2(\gamma^{*},k)$ if there is no pair of a domain $ W $ with $ W \cap \gamma^{*} \neq \emptyset$ and a continuous function $\lambda:\gamma^{*}\cup \left(W\cap K(\gamma,+) \right) \rightarrow \mathbb{C}$
which is also holomorphic at
$W\cap K(\gamma,+)$ and satisfies $\lambda|_{W \cap \gamma^{*}}=u|_{W \cap \gamma^{*}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_3(\gamma^{*},k)$ if there is no pair of a domain $ W $ with $ W \cap\gamma^{*}\neq\emptyset $ and a continuous function $\lambda:\gamma^{*}\cup \left(W\cap K(\gamma,-) \right) \rightarrow \mathbb{C}$
which is also holomorphic at
$W\cap K(\gamma,-)$ and satisfies $\lambda|_{W \cap \gamma^{*}}=u|_{W \cap \gamma^{*}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_4(\gamma^{*},k)$ if there are neither a pair of a domain $ W $ with $ W \cap \gamma^{*} \neq \emptyset$ and a continuous function $\lambda:\gamma^{*}\cup \left(W\cap K(\gamma,+) \right) \rightarrow \mathbb{C}$
which is also holomorphic at
$W\cap K(\gamma,+)$ and satisfies $\lambda|_{W \cap \gamma^{*}}=u|_{W \cap \gamma^{*}}$ nor a pair of a domain $ W $ with $ W \cap\gamma^{*}\neq\emptyset $ and a continuous function $\lambda:\gamma^{*}\cup \left(W\cap K(\gamma,-) \right) \rightarrow \mathbb{C}$
which is also holomorphic at
$W\cap K(\gamma,-)$ and satisfies $\lambda|_{W \cap \gamma^{*}}=u|_{W \cap \gamma^{*}}.$
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u\in C^k(\gamma^{*})$ belongs to
the class $U_5(\gamma^{*},k)$ if
there is no pair of a disk $D(z_0,r)$, $z_0\in \gamma^{*}$, $r>0$, and a holomorphic function
$f: D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap \gamma^{*}}=u|_{D(z_0,r)\cap \gamma^{*}}$.
\end{defi}
\begin{remark}
We observe that the above definitions do not depend on the choice of $ V $ in Definition 4.28.
\end{remark}
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(\gamma^{*},k)$,$U_3(\gamma^{*},k)$,
$U_4(\gamma^{*},k)$ and $U_5(\gamma^{*},k)$ are dense and $G_\delta$ subsets of $C^k(\gamma^{*})$.
\end{theorem}
\begin{proof}
The function $ f: C^k(\gamma^{*})\rightarrow C^k([a,b]) $ with $ f(u)=u\circ\gamma $ is a topological isomorphism. Using the function $ F $ of Definition 4.28, as in Theorem 4.23, we can prove that $ f\left(U_5(\gamma^{*},k)\right)=U_5([a,b],k) $. Thus, $U_5(\gamma^{*},k)$ is a dense and $G_\delta$ subset of $C^k(\gamma^{*})$, because $U_5([a,b],k) $ is a dense and $G_\delta$ subset of $C^k([a,b])$ according to Theorem 4.13. \\
In the same way we prove this result for the classes $U_2(\gamma^{*},k)$,$U_3(\gamma^{*},k)$,
$U_4(\gamma^{*},k)$.
\end{proof}
\subsubsection*{Locally analytic curve defined on open interval}
Now we extend the results to some curves which are defined on open intervals.
\begin{defi}
We will say that a simple curve $ \gamma:(a,b)\rightarrow \mathbb{C} $ is locally analytic if for every $ t_0 \in (a,b)$ there exist an open set $ t_0\in V\subseteq \mathbb{C} $ and a holomorphic and injective function $ F:V\rightarrow \mathbb{C} $ such that $ F|_{V\cap (a,b)}=\gamma|_{V\cap (a,b)} $.
\end{defi}
For the following definitions and propositions $ \gamma $ will be a locally analytic curve and $ V, (a,b) $ as in the above definition.
\\
Let $k=0,1,2,...$ or $\infty$. The set of continuous functions $f:\gamma^{*}\rightarrow
\mathbb{C}$ which are $k$ times continuously differentiable is denoted by
$C^k(\gamma^{*})$. The topology of the space is defined by the seminorms
$$\sup\limits_{t\in [a+\frac{1}{n},b-\frac{1}{n}]}{|(f\circ\gamma)^{(l)}(t)|},\ l=0,1,2,...,k,\ n=1,2,3, \dots$$ In this way
$C^k(\gamma^{*})$ becomes a Frechet space and
Baire's theorem is at our disposal.
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_2(\gamma^{*},k)$ if $ u|_{\delta_n^{*}} \in U_2(\delta_n^{*},k) $ for every $ n\geq 1 $, where $ \delta_n=\gamma|_{[a+\frac{1}{n},b-\frac{1}{n}]} $.
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_3(\gamma^{*},k)$ if $ u|_{\delta_n^{*}} \in U_3(\delta_n^{*},k) $ for every $ n\geq 1 $, where $ \delta_n=\gamma|_{[a+\frac{1}{n},b-\frac{1}{n}]} $.
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u \in C^k(\gamma^{*})$ belongs to the class
$U_4(\gamma^{*},k)$ if $ u|_{\delta_n^{*}} \in U_4(\delta_n^{*},k) $ for every $ n\geq 1 $, where $ \delta_n=\gamma|_{[a+\frac{1}{n},b-\frac{1}{n}]} $.
\end{defi}
\begin{defi}
Let $k=0,1,2,...$ or $\infty$. A function $u\in C^k(\gamma^{*})$ belongs to
the class $U_5(\gamma^{*},k)$ if
there is no pair of a disk $D(z_0,r)$, $z_0\in \gamma^{*}$, $r>0$, and a holomorphic function
$f: D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap \gamma^{*}}=u|_{D(z_0,r)\cap \gamma^{*}}$.
\end{defi}
\begin{theorem}
Let $k=0,1,2,...$ or $\infty$. The classes $U_2(\gamma^{*},k)$,$U_3(\gamma^{*},k)$,
$U_4(\gamma^{*},k)$ and $U_5(\gamma^{*},k)$ are dense and $G_\delta$ subsets of $C^k(\gamma^{*})$.
\end{theorem}
\begin{proof}
The function $ f: C^k(\gamma^{*})\rightarrow C^k((a,b)) $ with $ f(u)=u\circ\gamma $ is a topological isomorphism. We will prove that $ f\left(U_5(\gamma^{*},k)\right)=U_5((a,b),k) $. If $ u\in U_5(\gamma^{*},k) $ and $ f(u) \notin U_5((a,b),k) $, then there exists $ n $ such that $ f(u)|_{[a+\frac{1}{n},b-\frac{1}{n}]} \notin U_5([a+\frac{1}{n},b-\frac{1}{n}],k) $. Then, as we have seen in a previous proof, we have that $ u|_{\gamma([a+\frac{1}{n},b-\frac{1}{n}])} \notin U_5(\gamma([a+\frac{1}{n},b-\frac{1}{n}]),k) $ and therefore $ u\notin U_5(\gamma^{*},k) $, which is a contradiction. So, $ f\left(U_5(\gamma^{*},k)\right)\subseteq U_5((a,b),k) $. Similarly, we prove that $ f\left(U_5(\gamma^{*},k)\right)\supseteq U_5((a,b),k) $ and therefore $ f\left(U_5(\gamma^{*},k)\right)= U_5((a,b),k) $. Thus, $U_5(\gamma^{*},k)$ is a dense and $G_\delta$ subset of $C^k(\gamma^{*})$, because $U_5((a,b),k)$ is a dense and $G_\delta$ subset of $C^k((a,b))$ according to the respective result of the paragraph about the open interval. \\
In the same way we prove this result for the classes $U_2(\gamma^{*},k)$, $U_3(\gamma^{*},k)$ and
$U_4(\gamma^{*},k)$.
\end{proof}
\begin{remark}
In the previous results nothing essential changes if we consider $ a=-\infty $ and $ b=+\infty $.
\end{remark}
\section{Real analyticity as a rare phenomenon}
Now, we will associate the phenomenon of non-extendability with that of real analyticity. Using results of non-extendability, we will prove results for real analyticity, and we will also see an application in the converse direction.
\begin{prop}
Let $f:T \rightarrow \mathbb{C}$ and $t_0 \in \mathbb{R}$. Then
the following are equivalent:
\begin{enumerate}[(i)]
\item There exists a power series of a real variable
$$\sum\limits_{n=0}^\infty a_n(t-t_0)^n,a_n \in \mathbb{C}$$ which
has strictly positive radius of convergence $r>0$,
and there exists a $0<\delta\leq r$ such that $$f(e^{it})=
\sum\limits_{n=0}^\infty a_n(t-t_0)^n$$ for $t\in(t_0-\delta,t_0+
\delta)$.
\item There exists an open set
$V \subset \mathbb{C},e^{it_0}\in V$ and a holomorphic function
$F:V
\rightarrow
\mathbb{C}$ such that $F|_{V\cap T}=f|_{V\cap T}$.
\item There exists a power series of a complex variable
$$\sum\limits_{n=0}^\infty b_n(z-e^{it_0})^n,b_n \in \mathbb{C}$$
which has strictly positive radius of
convergence $s>0$, and there exists $0<\epsilon\leq s$ such that
$$f(e^{it})=
\sum\limits_{n=0}^\infty b_n(e^{it}-e^{it_0})^n$$ for $t\in(t_0-
\epsilon,t_0+
\epsilon)$.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)\Rightarrow(ii)$
We consider
$$g(z)=\sum\limits_{n=0}^\infty a_n(z-t_0)^n,$$
$z\in D(t_0,\delta)$,
which is well defined and holomorphic at $D(t_0,\delta)$.
We can assume that $\delta<\pi$, so that the function
$$\varphi:D(t_0,\delta)\rightarrow \mathbb{C},\varphi(z)=e^{iz}$$
is holomorphic and 1-1 and consequently there exists the inverse
function
$$\varphi^{-1}: \varphi(D(t_0,\delta))\rightarrow \mathbb{C}$$
which is also holomorphic. We define $$F=g\circ\varphi^{-1}:
\varphi(D(t_0,\delta))\rightarrow \mathbb{C},$$
which is a holomorphic function and $$F(z)=f(z),z\in \varphi(D(t_0,
\delta))
\cap T.$$
$(ii)\Rightarrow(iii)$
Obvious. \newline
$(iii)\Rightarrow(i)$
Let $$G(z)=\sum\limits_{n=0}^\infty b_n(z-e^{it_0})^n,z\in
D(e^{it_0},\epsilon).$$
The function $$\varphi: \mathbb{C}\rightarrow \mathbb{C},
\varphi(z)=e^{iz}$$ is holomorphic, and as a result the function
$$G\circ\varphi :\varphi^{-1}(D(e^{it_0},\epsilon))\rightarrow \mathbb{C}$$ is holomorphic on the open set $\varphi^{-1}(D(e^{it_0},\epsilon))$.
Hence, there exist $\delta>0$ such that
$D(t_0,\delta)\subset \varphi^{-1}(D(e^{it_0},\epsilon))$ and
$a_n\in \mathbb{C}$, $n=0,1,2,...$, for which
$$(G\circ\varphi)(z)=\sum\limits_{n=0}^\infty a_n(z-t_0)^n,
z\in D(t_0,\delta)$$ and therefore
$$f(e^{it})=G(e^{it})=\sum\limits_{n=0}^\infty a_n(t-t_0)^n,t
\in(t_0-\delta,t_0+\delta).$$
\end{proof}
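As a concrete illustration of the equivalence just proved, the following small numerical sketch (Python; the sample function $f(e^{it})=e^{2it}$, the point $t_0$, and all names are our illustrative choices, not from the text) checks property (i) with $a_n=e^{2it_0}(2i)^n/n!$ against property (iii) with the finite expansion of $z^2$ around $e^{it_0}$.

```python
import cmath
import math

t0 = 0.7
z0 = cmath.exp(1j * t0)

def f(t):
    # f(e^{it}) = e^{2it}, i.e. f(z) = z^2 on the circle T
    return cmath.exp(2j * t)

def real_series(t, N=60):
    # property (i): sum a_n (t - t0)^n with a_n = e^{2it0} (2i)^n / n!
    return sum(cmath.exp(2j * t0) * (2j) ** n / math.factorial(n) * (t - t0) ** n
               for n in range(N))

def complex_series(t):
    # property (iii): z^2 = z0^2 + 2 z0 (z - z0) + (z - z0)^2 around z0 = e^{it0}
    z = cmath.exp(1j * t)
    return z0 ** 2 + 2 * z0 * (z - z0) + (z - z0) ** 2

for t in [t0 - 0.3, t0, t0 + 0.25]:
    assert abs(f(t) - real_series(t)) < 1e-12
    assert abs(f(t) - complex_series(t)) < 1e-12
```

Both expansions agree with $f$ to machine precision on a neighbourhood of $t_0$, as the proposition predicts.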
\begin{defi}
A function $ f:T\rightarrow \mathbb{C} $ which satisfies the property (i) of Proposition 5.1
is called real analytic at $e^{it_0}$.
\end{defi}
\begin{defi}
A function $ f:T\rightarrow \mathbb{C} $ which satisfies the property (ii) of Proposition 5.1
is called extendable at $e^{it_0}$.
\end{defi}
Proposition 5.1 can also be stated as follows:
\begin{prop}
A function $ f:T\rightarrow \mathbb{C} $ is extendable at $e^{it_0}$ if and only if it is real analytic at $e^{it_0}$.
\end{prop}
Now, we want to generalize the previous proposition to more general curves, in order to associate the phenomenon of non-extendability with that of real analyticity for these curves. We will use this connection in order to prove results which show that real analyticity is a rare phenomenon. To establish this connection, we would need the analogue of Proposition 5.1 for these curves. However, we will prove that this statement holds only for Jordan curves which are analytic.
\begin{lemma}
Let $ t_0 \in (0,1) $ and let $ \gamma:[0,1]\rightarrow \mathbb{C} $ be a Jordan curve for which the following holds:\\
For every open interval $ I\subseteq \mathbb{R} $ with $ t_0 \in I $ and for every function $ f: I \rightarrow \mathbb {C} $, the following are equivalent:\\
1) There is a power series of a real variable
$$\sum\limits_{n=0}^\infty a_n(t-t_0)^n,a_n \in \mathbb{C}$$
with a positive radius of convergence $r>0$ and there is $0<\delta\leq r$ so that $$f(t)=
\sum\limits_{n=0}^\infty a_n(t-t_0)^n$$ for $t\in(t_0-\delta,t_0+
\delta)$.\\
2) There is a power series of a complex variable
$$\sum\limits_{n=0}^\infty b_n(z- \gamma(t_0))^n,b_n \in \mathbb{C}$$ with a positive radius of convergence $s>0$ and $0<\epsilon\leq s$ so that
$$f(t)=
\sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n$$ for $t\in(t_0-
\epsilon,t_0+
\epsilon)$.
\\
Then $ \gamma $ is locally analytic at $ t_0 $, meaning that there exists an
open set $ V $ in $ \mathbb{C} $ containing an interval $(t_0-\delta, t_0+\delta)\subseteq[0, 1]$, where $ \delta >0 $, and there exists a conformal mapping
$\varphi : V \rightarrow \mathbb {C}$ such that $\varphi|_{(t_0-\delta, t_0+\delta)} = \gamma|_{(t_0-\delta, t_0+\delta)}$.
\end{lemma}
\begin{proof}
We will use the implication $ 2)\Rightarrow 1) $ only to prove that $ \gamma $ is differentiable in an open interval which contains $ t_0 $. There exists $ \beta >0 $ so that $I=(t_0-\beta, t_0+\beta)\subseteq[0, 1]$. For every $ t \in I $ we choose $ f(t)=\gamma(t)= \gamma(t_0)+(\gamma(t)-\gamma(t_0)) $, and so by $ 2)\Rightarrow 1) $ there exists $0< \delta < \beta$ so that $$\gamma(t)=
\sum\limits_{n=0}^\infty a_n(t-t_0)^n$$ for some constants $a_n \in \mathbb{C}$, for every $t\in(t_0-\delta,t_0+
\delta)$. Therefore $ \gamma $ is differentiable in this interval. Now, for $ g(t)=t=t_0+(t-t_0) $, by $ 1)\Rightarrow 2)$ there exists $0<\epsilon\leq \delta$ so that
$$t= \sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n$$ for some constants $b_n \in \mathbb{C}$, for every $t\in(t_0- \epsilon,t_0+ \epsilon)$. Differentiating the above equation at $t=t_0 $ gives $ 1=b_{1}\gamma'(t_0) $; therefore $b_{1}\neq 0$. Now, the power series $\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n$ has a positive radius of convergence, so there exists $\alpha>0$ such that $ \gamma(t)\in D(\gamma(t_0),\alpha)$ for every $ t \in (t_0- \epsilon,t_0+ \epsilon) $ and the function $ f:D(\gamma(t_0),\alpha)\rightarrow \mathbb{C}$ with $$ f(z)=\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n $$ is holomorphic. Also, we have that $ f(\gamma(t))=t $ for every $ t \in (t_0- \epsilon,t_0+ \epsilon) $. Because $ f'(\gamma(t_0))=b_1\neq 0 $, $ f $ is locally invertible; let $ h=f^{-1} $ on an open disk $ D(t_0,\eta) $, where $ 0<\eta <\epsilon $. Then $ \gamma(t)=h(t) $ for every $ t \in (t_0-\eta, t_0+\eta) $, where $ h $ is holomorphic and injective, and the proof is complete.
\end{proof}
\begin{remark}
The above proof shows that if $\gamma $ in Lemma 5.5 belongs to $ C^1([0,1]) $, then the conclusion of the lemma remains true even if we only assume that $ 1)\Rightarrow 2) $ holds. A simpler proof of the lemma, which, however, does not yield the version described in the previous sentence, can also be given.
\end{remark}
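The pivotal identity $ 1=b_1\gamma'(t_0) $ in the proof above can be checked on the simplest analytic Jordan curve $ \gamma(t)=e^{it} $, where a local inverse of $ \gamma $ near $ t_0 $ is $ f(z)=t_0-i\,\mathrm{Log}(z e^{-it_0}) $. A small numerical sketch (Python; the choice of curve, point and branch of the logarithm are our illustrative assumptions, not from the text):

```python
import cmath

t0 = 0.4
z0 = cmath.exp(1j * t0)

gamma = lambda t: cmath.exp(1j * t)
# local inverse of gamma near t0: f(z) = t0 - i * Log(z * e^{-i t0})
f = lambda z: t0 - 1j * cmath.log(z * cmath.exp(-1j * t0))

# f really inverts gamma near t0
for t in [t0 - 0.2, t0 + 0.1]:
    assert abs(f(gamma(t)) - t) < 1e-12

# b1 = f'(gamma(t0)) via a central difference; check 1 = b1 * gamma'(t0)
h = 1e-6
b1 = (f(z0 + h) - f(z0 - h)) / (2 * h)
assert abs(b1 * (1j * z0) - 1) < 1e-6
```

Here $ b_1=f'(\gamma(t_0))=-ie^{-it_0} $ and $ \gamma'(t_0)=ie^{it_0} $, so the product is indeed $1$.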
Now, if we have a Jordan curve which satisfies the assumptions of the previous lemma at each of its points, then from this lemma we conclude that the curve is locally analytic at each of its points. From [9] we conclude that the curve is an analytic Jordan curve. So, Proposition 5.1 cannot be generalized to Jordan curves which are not analytic. Below we show that for analytic Jordan curves the generalization is true.
\begin{lemma}
Let $f:I \rightarrow \mathbb{C}$ be a function, where $ I $ is an open interval and $t_0 \in I $, and let $ \gamma:[0,2\pi]\rightarrow \mathbb{C}$ be an analytic Jordan curve. We extend $ \gamma $ to a $ 2\pi $-periodic function. Then the following are equivalent: \\
1) There exists a power series of a real variable
$$\sum\limits_{n=0}^\infty a_n(t-t_0)^n,a_n \in \mathbb{C}$$ with positive radius of convergence $r>0$ and there exists a $0<\delta\leq r$ such that $$f(t)=
\sum\limits_{n=0}^\infty a_n(t-t_0)^n$$ for $t\in(t_0-\delta,t_0+
\delta)$.\\
2) There exists a power series of a complex variable
$$\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n,b_n \in \mathbb{C}$$ with positive radius of convergence $s>0$ and there exists $\epsilon>0$ such that
$$f(t)=
\sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n$$ for $t\in(t_0-
\epsilon,t_0+
\epsilon)$.
\end{lemma}
\begin{proof}
Because $\gamma$ is an analytic Jordan curve, there is a holomorphic and injective function $ \Phi :D(0,r,R)\rightarrow\mathbb{C} $, where $ 0<r<1<R $, with $ \gamma(t)=\Phi(e^{it}) $ for every $ t \in \mathbb{R} $. Therefore there is an open disk $ D(t_0,\varepsilon)\subseteq\mathbb{C} $, where $ \varepsilon>0 $, and a holomorphic and injective function $ \Gamma:D(t_0,\varepsilon)\rightarrow \mathbb{C} $ with $\Gamma(z)=\Phi(e^{iz})$.\\
$1)\Rightarrow 2)$
We consider the function
$$g(z)=\sum\limits_{n=0}^\infty a_n(z-t_0)^n,$$
$z\in D(t_0,\delta)$,
which is well defined and holomorphic in $D(t_0,\delta)$. We may assume that $\varepsilon\leq\delta$. We have that
$$\Gamma^{-1}: \Gamma(D(t_0,\varepsilon))\rightarrow \mathbb{C}$$ is a holomorphic function. We consider the function $$F=g\circ\Gamma^{-1}:
\Gamma(D(t_0,\varepsilon))\rightarrow \mathbb{C}$$ (where $\Gamma(D(t_0,\varepsilon))$ is an open set), which is a holomorphic function and satisfies $$(F\circ\Gamma)(t)=f(t),t\in D(t_0,\varepsilon)
\cap I.$$ Therefore, there exist $b_n\in \mathbb{C}$, $n=0,1,2,...$, and $ \rho>0 $ such that
$$ F(z)=\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n $$\\
for every $z \in D(\gamma(t_0),\rho)\subseteq\Gamma(D(t_0,\varepsilon))$ and so $$ f(t)=(F\circ\gamma)(t)=\sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n $$
in an interval $ (t_0- s,t_0+s)$ where $ s>0 $.
\\
$2)\Rightarrow 1)$
We consider the function $$G(z)=\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n,$$
$z\in D(\gamma(t_0),s)$. We choose $ a>0 $ with $ a<\varepsilon $ such that $ \Gamma(D(t_0,a))\subseteq D(\gamma(t_0),s)$.
The function
$$G\circ\Gamma :D(t_0,a)\rightarrow \mathbb{C}$$ is holomorphic.
Therefore, there exist
$a_n\in \mathbb{C}$, $n=0,1,2,...$, such that
$$(G\circ\Gamma)(z)=\sum\limits_{n=0}^\infty a_n(z-t_0)^n,
z\in D(t_0,a)$$ and consequently
$$f(t)=G(\gamma(t))=\sum\limits_{n=0}^\infty a_n(t-t_0)^n,t\in(t_0-
a,t_0+a).$$
\end{proof}
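The proof above passes through the annulus parametrization $ \Gamma(z)=\Phi(e^{iz}) $. A minimal concrete instance (Python; the Joukowski-type map $ \Phi(w)=w+0.25/w $ and all names are our illustrative choices): $ \Phi $ is holomorphic and injective for $ |w|>0.5 $, and $ \gamma(t)=\Phi(e^{it}) $ is an analytic Jordan curve, namely the ellipse $ 1.25\cos t+0.75\,i\sin t $.

```python
import cmath
import math

# Joukowski-type map, holomorphic and injective on |w| > 0.5 (illustrative choice)
Phi = lambda w: w + 0.25 / w
gamma = lambda t: Phi(cmath.exp(1j * t))   # an analytic Jordan curve

for t in [0.0, 0.9, 2.4, 5.0]:
    z = gamma(t)
    # gamma(t) = 1.25 cos t + 0.75 i sin t ...
    assert abs(z.real - 1.25 * math.cos(t)) < 1e-12
    assert abs(z.imag - 0.75 * math.sin(t)) < 1e-12
    # ... so every point lies on the ellipse (x/1.25)^2 + (y/0.75)^2 = 1
    assert abs((z.real / 1.25) ** 2 + (z.imag / 0.75) ** 2 - 1) < 1e-12
```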
As we have seen before, if $ \delta:[0,2\pi]\rightarrow\mathbb {C} $ is an analytic Jordan curve, $ \Omega $ is the interior of $ \delta $ and $ \varphi:D\rightarrow \Omega $ is a Riemann function, then $ \varphi $ has a conformal extension to an open neighbourhood $ W $ of $ \overline{D} $, and $ \sup\limits_{|z|\leq1} |\varphi^{(l)}(z)|< +\infty$, $ \sup\limits_{z\in\overline{\Omega}} |(\varphi^{-1})^{(l)}(z)|< +\infty$ for every $ l=0,1,2 \dots $. Let $ \gamma:[0,2\pi]\rightarrow \mathbb{C} $ be an analytic Jordan curve with $ \gamma(t)=\varphi(e^{it}) $ for $ t \in [0, 2\pi] $. We extend $ \gamma $ to a $ 2\pi $-periodic function. In what follows, the symbols $ \Omega, \gamma, \varphi, W $ will have the meaning given above, unless otherwise stated.
\begin{defi}
A function $ f:\gamma^{*}\rightarrow \mathbb{C} $ is real analytic at $ \gamma(t_0) $, if there exists a power series $ \sum\limits_{n=0}^\infty a_n(t-t_0)^n $ with a radius of convergence $ \delta>0 $ such that $ f(\gamma(t))=\sum\limits_{n=0}^\infty a_n(t-t_0)^n $ for every $ t \in(t_0-\delta,t_0+\delta) $.
\end{defi}
\begin{defi}
We will say that a function $ f:\gamma^{*}\rightarrow \mathbb{C} $ is extendable at $ \gamma(t_0) $ if there exists an open set $ V $ with $ \gamma(t_0)\in V $ and a holomorphic function $ F:V\rightarrow \mathbb {C} $ such that $ F|_{V\cap \gamma^{*}}=f|_{V\cap \gamma^{*}} $. If a function $ f \in C^{\infty}(\gamma^{*}) $ is not extendable at any $ \gamma(t_0) $, then it belongs to $ U_5(\gamma^{*}, \infty) $ .
\end{defi}
The next proposition is, in fact, equivalent to Lemma 5.7.
\begin{prop}
A function $ f:\gamma^{*}\rightarrow \mathbb{C} $ is real analytic at $ \gamma(t_0) $ if and only if $ f $ is extendable at $ \gamma(t_0) $.
\end{prop}
\begin{proof}
$ \Rightarrow: $ If $ f $ is real analytic at $ \gamma(t_0) $, then from Lemma 5.7
$$f(\gamma(t))=
\sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n$$ for every $t\in(t_0-
\epsilon,t_0+
\epsilon)$, for some $ b_n \in \mathbb {C}, \epsilon>0 $, and therefore
$$F(z)=
\sum\limits_{n=0}^\infty b_n(z-\gamma(t_0))^n$$ for $z\in D(\gamma(t_0),\epsilon)$ is an extension of $ f $ at $ \gamma(t_0) $.\\
$ \Leftarrow: $ If $ f $ is extendable at $ \gamma(t_0) $, with extension $ F $, then
$$f(\gamma(t))=F(\gamma(t))=\sum\limits_{n=0}^\infty b_n(\gamma(t)-\gamma(t_0))^n$$
for every $t\in(t_0-\epsilon,t_0+\epsilon)$, for some $ b_n \in \mathbb {C}, \epsilon>0 $, and as a result, again from Lemma 5.7, we conclude that $ f $ is real analytic at $ \gamma(t_0) $.
\end{proof}
The previous proposition and Theorem 4.23 immediately prove the following theorem.
\begin{theorem}
For $ k=0,1,2, \dots, \infty $ the set of functions $ f \in C^{k}(\gamma^{*}) $ which are nowhere real analytic is a dense $ G_{\delta} $ subset of $ C^{k}(\gamma^{*}) $.
\end{theorem}
The following more general result was suggested by J.-P. Kahane.
\\
Let $ z_n, n=0,1,2, \cdots $ be a dense sequence in $ T $ and let $ M=(M_n), n=0,1,2, \cdots $ be a sequence of real numbers. For $ k,l \in \mathbb{N} $ we define the set
$$ F (M,k,z_l ) = \left\{ f \in C^{\infty}(T) : \left| \frac{d^{n}(f)}{d\vartheta^{n}} (z_l) \right| \leq M_n k^n \ \text{for all} \ n=0,1,2, \dots \right\} $$
\begin{lemma}
$ F (M,k,z_l ) $ is closed and has an empty interior in $C^{\infty}(T)$.
\end{lemma}
\begin{proof}
It is obvious that this set is closed. Suppose that there is $ f\in \left(F (M,k,z_l )\right)^{o}$. Then, there exist $ m\in \mathbb{N} $ and $ \varepsilon>0 $ such that
$$V=\left\{ u \in C^{\infty}(T) : \sup\limits_{\vartheta\in \mathbb{R}}\left| \frac{d^{n}(u)}{d\vartheta^{n}}(e^{i\vartheta})-\frac{d^{n}(f)}{d\vartheta^{n}} (e^{i\vartheta}) \right|<\varepsilon \ \text{for all} \ n=0,1,2, \cdots , m \right\}\subseteq F (M,k,z_l ). $$
\\
Let $$ A>k^{m+1} M_{m+1}+ \sup\limits_{\vartheta\in \mathbb{R}} \left| \frac{d^{m+1}(f)}{d\vartheta^{m+1}}(e^{i\vartheta}) \right|$$
and let $ b \in \mathbb{N} $ with $ \frac{A}{b}<\varepsilon $. Consider the trigonometric polynomial $ a(e^{i\vartheta})=\frac{A}{b^{m+1}}e^{ib\vartheta} $. We have that
$$ \sup\limits_{\vartheta\in \mathbb{R}} \left| \frac{d^{n}(a)}{d\vartheta^{n}}(e^{i\vartheta}) \right| \leq \frac{A}{b}<\varepsilon $$ for $ n=0,1,2, \cdots, m $ and
$$ \left| \frac{d^{m+1}(a)}{d\vartheta^{m+1}}(e^{i\vartheta}) \right|=A. $$
Therefore, $ a+f \in V\subseteq F (M,k,z_l ) $, and so we have that $$A - \left| \frac{d^{m+1}(f)}{d\vartheta^{m+1}} (z_l) \right| \leq M_{m+1} k^{m+1}, $$
which is a contradiction.
\end{proof}
As a result of this lemma, we have from Baire's theorem that the set
$ \bigcap\limits_{l=1}^{\infty}\bigcap\limits_{k=1}^{\infty}\left(C^{\infty}(T)\backslash F (M,k,z_l )\right)$ is a dense and $ G_\delta$ subset of $C^{\infty}(T)$. For $ M_n=n! $, if $ f\in\bigcap\limits_{l=1}^{\infty}\bigcap\limits_{k=1}^{\infty}\left(C^{\infty}(T)\backslash F (M,k,z_l )\right) $,
then we can easily see that $f$ is nowhere real analytic on $ T $, and therefore, according to Proposition 5.10, $ U_5(T,\infty) $ contains a dense and $ G_\delta $ subset of $C^{\infty}(T)$.
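The size of the derivatives of the perturbation $ a(e^{i\vartheta})=\frac{A}{b^{m+1}}e^{ib\vartheta} $ used in the lemma can be checked numerically: the $n$-th $\vartheta$-derivative has modulus $ A\,b^{\,n-m-1} $, which is at most $ A/b<\varepsilon $ for $ n\leq m $ but exactly $ A $ for $ n=m+1 $. A small sketch (Python; the specific values of $m$, $A$, $\varepsilon$ are arbitrary illustrative choices):

```python
# Perturbation from the lemma: a(e^{i theta}) = (A / b^{m+1}) e^{i b theta}.
# Its n-th theta-derivative has modulus A * b**n / b**(m+1) = A * b**(n-m-1).
m, A, eps = 3, 10.0, 0.5
b = int(A / eps) + 1          # any natural number b with A / b < eps

deriv_sup = lambda n: A * b ** n / b ** (m + 1)

for n in range(m + 1):
    # small in every order up to m, so a + f stays in the neighbourhood V
    assert deriv_sup(n) < eps
assert A / b < eps
# but of modulus exactly A in order m + 1, which forces the contradiction
assert deriv_sup(m + 1) == A
```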
\begin{defi}
Let $ f:\Omega\rightarrow\mathbb{C}$ be a holomorphic function. We say that $ f $ belongs to the set $ A^{\infty}(\Omega) $ if for every $ l=0,1,2,3 \dots $, $ f^{(l)} $ can be extended to a continuous function in $ \overline{\Omega} $. The topology of $A^{\infty}(\Omega)$ is defined by the seminorms
$$ \sup\limits_{z\in\overline{\Omega}} |f^{(l)}(z)| , l=0,1,2,3\dots $$
It can easily be proven that, for $ g=f\circ\varphi $, we have $ f\in A^{\infty}(\Omega) $ if and only if $ g \in A^{\infty}(D) $, and that the function $ h:A^{\infty}(\Omega)\rightarrow A^{\infty}(D) $ with $ h(f)=f\circ\varphi=g $ is linear and a topological isomorphism.
\end{defi}
\begin{defi}
We say that a harmonic function $ f:\Omega\rightarrow \mathbb {C} $ belongs to the set $ H_{\infty}(\Omega) $ if $ L_{k,l}(f) $ can be extended to a continuous function on $ \overline{\Omega} $ for every $ k,l=0,1,2 \dots $, where $ L_{k,l}=\frac{\partial^k}{\partial z^k} \frac{\partial^l}{\partial \overline{z}^l}$. We observe that if $ k\geq 1 $ and $ l\geq 1 $ then $ L_{k,l}(f)\equiv0 $, because $ f $ is a harmonic function. The topology of $ H_{\infty}(\Omega) $ is defined by the seminorms
$$ \sup\limits_{z\in\overline{\Omega}} \left| \frac{\partial^k f(z)}{\partial z^k} \right|, \sup\limits_{z\in\overline{\Omega}} \left|\frac{\partial^l f(z)}{\partial \overline{z}^l}\right|, k,l\geq 0.$$
As in the previous definition, we can see that for $ g=f\circ\varphi $ we have $ f \in H_{\infty}(\Omega) $ if and only if $ g \in H_{\infty}(D) $, and that the function $ h:H_{\infty}(\Omega)\rightarrow H_{\infty}(D) $ with $ h(f)=f\circ\varphi=g $ is a topological isomorphism.
\end{defi}
At this point, we use the conformal function $ \varphi $ in order to extend a previous result. Specifically, we can easily conclude that $ C^{\infty}(\partial \Omega)=A^{\infty}(\Omega)\oplus \overline{A^{\infty}_0(\Omega)}=H_{\infty}(\Omega) $, where $A^{\infty}_0(\Omega)=\{f\in A^{\infty}(\Omega): f(\varphi(0))=0\}$.
Now, we will consider the case of a domain $ \Omega $ which is bounded by a finite number of pairwise disjoint analytic Jordan curves $ \gamma_1,\gamma_2,\dots ,\gamma_m $. For a function $ u: \partial \Omega\rightarrow \mathbb {C} $ we say that $ u\in C^{\infty}(\partial \Omega) $ if $ u|_{\gamma^{*}_{j}} \in C^{\infty}(\gamma^{*}_{j}) $ for every $ j=1,2,3 \dots, m $. Let $ \varphi_j:D\rightarrow V_j $ be a Riemann function, where $V_j$ is the interior of $ \gamma_j $, for every $ j=1,2,3 \dots, m $. The topology of $C^{\infty}(\partial \Omega)$ is defined by the seminorms
$$ \max\limits_{j=1,2, \dots , m}\sup\limits_{\vartheta\in \mathbb{R}} \left| \frac{d^l (u\circ\gamma_j)}{d\vartheta^l}(\vartheta)\right|, l=0,1,2, \dots$$
In what follows, the symbols $ \Omega, \gamma_j, \varphi_j, V_j$ will have the meaning given above, unless otherwise stated.
\\
\begin{defi}
We say that a holomorphic function $ f:\Omega\rightarrow\mathbb{C} $ belongs to the class $ A^{\infty}(\Omega) $ if for every $ l=0,1,2 \dots $, $ f^{(l)} $ extends to a continuous function on $\overline{\Omega} $. The topology of $ A^{\infty}(\Omega) $ is defined by the seminorms $ \sup\limits_{z\in\overline{\Omega}} \left| f^{(l)}(z)\right|, l=0,1,2,3 \dots$
\end{defi}
According to a well-known result ([3]), every holomorphic function $ f $ in $ \Omega $ can be uniquely written as $ f=f_0+f_1+\dots+f_{m-1} $, where $ f_j $ is holomorphic in $ W_j^{c} $ for $ j=0,1,2, \dots, m-1 $ and $ \lim\limits_{z\rightarrow\infty}f_k(z)=0 $ for $ k=1,2, \dots, m-1 $, where $ W_j, j=0,1,2,\dots,m-1 $ are the connected components of $ \left(\overline{\Omega}\right)^c $ and $ W_0 $ is the unbounded connected component of $ \left(\overline{\Omega}\right)^c $. It is obvious that if $ f\in A^{\infty}(\Omega) $ then $ f_j\in A^{\infty}(W_j^c) $, and as a result we have that $ A^{\infty}(\Omega)=A^{\infty}(W_0^c)\oplus A_0^{\infty}(W_1^c)\oplus\dots\oplus A_0^{\infty}(W_{m-1}^c) $, where $A_0^{\infty}(W_k^c)=\{f\in A^{\infty}(W_k^c):\lim\limits_{z\rightarrow\infty}f(z)=0\}$ for $ k=1,2,\dots,m-1 $.
\\
At this point, we want to examine whether it is true that $ C^{\infty}(\partial\Omega)=A^{\infty}(\Omega)_{|\partial\Omega}\oplus \overline{X} $ for a subspace $ X $ of $ A^{\infty}(\Omega)_{|\partial\Omega} $, in order to extend the corresponding result for $ \Omega=D $. Unfortunately, we can prove that this is false when $ m\geq 2 $, in other words, when $ \Omega $ is not simply connected.
\\
Indeed, let $ z_0 $ be a point in a bounded connected component of $ \left(\overline{\Omega}\right)^c $. Also, let $ u:\mathbb {C}\setminus \{z_0\}\rightarrow \mathbb {R} $ be the function with $ u(z)=\ln|z-z_0| $ for $ z\in \mathbb {C}\setminus \{z_0\} $. This function is harmonic and $ u|_{\partial\Omega} \in C^{\infty}(\partial\Omega) $. If there exist $ f,g \in A^{\infty}(\Omega) $ such that $ u|_{\partial\Omega}=f|_{\partial\Omega}+\overline{g}|_{\partial\Omega} $, then we have that
$$ u|_{\partial\Omega}=Re(u|_{\partial\Omega})=Re\left(f|_{\partial\Omega}+\overline{g}|_{\partial\Omega} \right)=Re (f)|_{\partial\Omega}+Re(g) |_{\partial\Omega}=Re(f+g)|_{\partial\Omega}$$
So, we have that $ u, Re(f+g)$ are harmonic functions in $ \Omega $, continuous in $ \overline{\Omega} $, and $ u|_{\partial\Omega}= Re(f+g)|_{\partial\Omega} $. Therefore, from a known theorem, we have that $ u=Re(f+g) $ in $ \Omega $. The function $ \frac{z-z_0}{e^{(f+g)(z)}} $ is holomorphic in $ \Omega $ and $$ \left|\frac{z-z_0}{e^{(f+g)(z)}}\right|=\frac{|z-z_0|}{e^{\left(Re(f+g)\right)(z)}}=\frac{|z-z_0|}{e^{u(z)}}=\frac{|z-z_0|}{|z-z_0|}=1 $$ for every $ z\in \Omega $. As a result, because $\Omega$ is a domain, $ \frac{z-z_0}{e^{(f+g)(z)}} $ is constant in $ \Omega $, and so there exists $ c\in\mathbb {C} $ with $ |c|=1 $ and $ ce^{(f+g)(z)}=z-z_0 $ for every $ z\in \Omega $. Therefore, there exists a holomorphic logarithm of $ z-z_0 $ in $ \Omega $, while there is a Jordan curve in $ \Omega $ which contains $ z_0 $ in its interior; it is well known that this is a contradiction.
\\
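The final step rests on the fact that $ z-z_0 $ admits no holomorphic logarithm on a domain containing a Jordan curve around $ z_0 $: otherwise $ 1/(z-z_0) $ would have a primitive there, and its integral over any closed curve would vanish, whereas it equals $ 2\pi i $. A numerical sketch of this contour integral (Python; the point $ z_0 $ and the discretization are our illustrative choices):

```python
import cmath
import math

# Contour integral of 1/(z - z0) over a circle of radius 1 around z0,
# approximated by the midpoint rule over N chords.
z0 = 0.3 + 0.2j
N = 2000
total = 0.0
for k in range(N):
    ta, tb = 2 * math.pi * k / N, 2 * math.pi * (k + 1) / N
    za = z0 + cmath.exp(1j * ta)
    zb = z0 + cmath.exp(1j * tb)
    zm = (za + zb) / 2
    total += (zb - za) / (zm - z0)

# The integral is 2*pi*i, not 0, so 1/(z - z0) has no primitive,
# and z - z0 has no holomorphic logarithm, around z0.
assert abs(total - 2j * math.pi) < 1e-3
```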
\begin{defi}
We will say that a function $ f\in A^{\infty}(\Omega) $ is extendable at $ \gamma_j(t_0) $ for some $ j\in \{1,2 \dots, m\} $ if there exist $ \delta>0 $ and a holomorphic function $ F:D(\gamma_j(t_0),\delta)\rightarrow \mathbb {C} $ such that $ F|_{D(\gamma_j(t_0),\delta)\cap \overline{\Omega}}=f|_{D(\gamma_j(t_0),\delta)\cap \overline{\Omega}} $.
\end{defi}
At this point, we will need the following proposition:
\begin{prop}
A function $ f\in A^{\infty}(\Omega) $ is extendable at $ \gamma_j(t_0) $ if and only if $ f|_{\gamma^{*}_j} $ is real analytic at $ \gamma_j(t_0) $.
\end{prop}
\begin{proof}
If $ f $ is extendable at $ \gamma_j(t_0) $ then $ f|_{\gamma^{*}_j} $ is real analytic at $ \gamma_j(t_0) $ from Proposition 5.10.\\
Conversely, if $ f|_{\gamma^{*}_j} $ is real analytic at $ \gamma_j(t_0) $ , then from Proposition 5.10 there exists $ \delta>0 $ and a holomorphic function $ g:D(\gamma_j(t_0),\delta)\rightarrow \mathbb {C} $ such that $ f|_{\gamma^{*}_j \cap D(\gamma_j(t_0),\delta)}= g|_{\gamma^{*}_j \cap D(\gamma_j(t_0),\delta)}$. Let $ F:D(\gamma_j(t_0),\delta)\rightarrow \mathbb {C} $ be a function with
$$ F(z) = \left\{
\begin{array}{ c l }
f(z), & z\in D(\gamma_j(t_0),\delta) \cap \overline{\Omega} \\
g(z), & z\in D(\gamma_j(t_0),\delta) \cap (\mathbb {C}\setminus \Omega)
\end{array}
\right. $$
The function $ F $ is well defined, since $ f|_{\gamma^{*}_j \cap D(\gamma_j(t_0),\delta)}= g|_{\gamma^{*}_j \cap D(\gamma_j(t_0),\delta)}$; it is holomorphic in $D(\gamma_j(t_0),\delta)\setminus \gamma^{*}_j$ and continuous in $D(\gamma_j(t_0),\delta)$. So, according to Proposition 2.1, we have that $ F $ is holomorphic in $D(\gamma_j(t_0),\delta)$, and so $ F $ is an extension of $ f $ at $ \gamma_j(t_0) $.
\end{proof}
With the previous Proposition, we can prove the following theorem.
\begin{theorem}
The set of functions $ f\in A^{\infty}( \Omega )$ such that for every $ j\in \{1,2 \dots, m\} $, $f|_{\gamma^{*}_j} $ is nowhere real analytic, is a dense and $ G_{\delta} $ subset of $ A^{\infty} (\Omega) $.
\end{theorem}
\begin{proof}
Because of Proposition 5.17, the set of functions $ f\in A^{\infty} (\Omega) $ such that for every $ j\in \{1,2 \dots, m\} $, $ f|_{\gamma^{*}_j} $ is nowhere real analytic coincides with the set of functions $ f\in A^{\infty}( \Omega )$ which are nowhere extendable on $ \partial\Omega $. However, the latter set is a dense and $ G_{\delta} $ subset of $ A^{\infty}( \Omega) $ according to [8] and [10]. Therefore, the same is true for the former set.
\end{proof}
\begin{remark}
According to [6], if $ \gamma $ is an analytic Jordan curve, then the same is true for the parametrization of $ \gamma $ by arc length. Therefore, Theorem 5.18 is also valid for this parametrization of $ \gamma $.
\end{remark}
\section{The complex method and non-extendability}
In this paragraph we will use a complex method to prove the results of Paragraph 3.
\\
Firstly, we will consider the segment $ [0,1] $ and we will prove that for every $ k=1,2, \dots $ or $k=\infty $ the set of functions of $C^{k}[0,1]$ which do not extend to a holomorphic function on a disk centered at any point of $ [0,1] $ is a dense $ G_\delta $ subset of the respective $ C^{k}[0,1] $. The same is true for continuous functions, and for semidisks instead of disks.\\
For the proof, we will need the following lemma.
\begin{lemma}
Let $ D(t_0,r)$ be a disk with center $ t_0\in (0,1) $ and radius $ r>0 $, and let $ 0 < M < +\infty $. The set of continuous functions $ f:[0,1]\rightarrow \mathbb{C} $ for which there exists a holomorphic function $ F $ on $ D(t_0,r)$, where $ F $ is bounded by $ M $ and $ F|_{D(t_0,r)\cap [0,1]}=f|_{D(t_0,r)\cap [0,1]} $, is a closed subset of $ C\left([0,1]\right) $ and has empty interior.
\end{lemma}
\begin{proof}
Let $ A $ be the set of continuous functions $ f:[0,1]\rightarrow \mathbb{C} $ which extend to a holomorphic function $ F $ on $ D(t_0,r)$, where $ F $ is bounded by $ M $ and $ F|_{D(t_0,r)\cap [0,1]}=f|_{D(t_0,r)\cap [0,1]} $. If this set does not have empty interior, then there is a function $ f $ in the interior of $ A $ and $ \delta>0 $ such that $$\left\{g \in C\left([0,1]\right): \sup \limits_{t \in [0,1]}\left| f(t)-g(t) \right| < \delta \right\}\subseteq A.$$ We choose $ z_0 \in D(t_0,r)\backslash [0,1] $ and $ a>0 $ with $ a<\delta \inf \limits_{t\in [0,1]}|t-z_0| $. The function $ h(t)=f(t)+ \dfrac{a}{2(t-z_0)} $ for $ t\in [0,1] $ belongs to $ A $ and therefore has a holomorphic and bounded extension $ G $ on $ D(t_0,r)$ with $ G|_{D(t_0,r)\cap [0,1]}=h|_{D(t_0,r)\cap [0,1]} $. However, $ G(z)=F(z)+ \dfrac{a}{2(z-z_0)} $ for $ z\in D(t_0,r)\backslash \{z_0\}$ by analytic continuation, since the two sides are equal on $ D(t_0,r)\cap[0,1] $. As a result, $ G $ is not bounded on $ D(t_0,r)$, which is a contradiction. So, $ A $ has empty interior.\\
Let $ (f_n)_{n\geq 1} $ be a sequence in $ A$ which converges uniformly on $ [0,1] $ to a function $ f $. Then, for $ n=1,2, \dots $ there are holomorphic functions $ F_n:D(t_0,r)\rightarrow\mathbb{C} $, bounded by $ M $, with $ F_n|_{D(t_0,r)\cap [0,1]}=f_n|_{D(t_0,r)\cap [0,1]} $. By Montel's theorem, there is a subsequence $ (F_{k_n}) $ of $ (F_n) $ which converges uniformly on the compact subsets of $ D(t_0,r) $ to a function $ F $ which is holomorphic and bounded by $ M $ on $ D(t_0,r) $. Because $ F_{k_n} \rightarrow f$ on $ D(t_0,r)\cap [0,1] $, we have that $ F|_{D(t_0,r)\cap [0,1]}=f|_{D(t_0,r)\cap [0,1]} $ and so $ f \in A $. Therefore, $ A $ is a closed set.
\end{proof}
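The perturbation $ h(t)=f(t)+\frac{a}{2(t-z_0)} $ in the proof is uniformly small on $ [0,1] $, while its extension term blows up near $ z_0 $ inside the disk. A quick numerical check of these two facts (Python; the values of $ \delta $ and $ z_0 $ are our illustrative choices):

```python
# h(t) = f(t) + a / (2 (t - z0)) with z0 off [0, 1] and
# a < delta * inf_{t in [0,1]} |t - z0|, as in the lemma.
delta = 0.1
z0 = 0.5 + 0.05j                          # a point of the disk off [0, 1]
ts = [k / 1000 for k in range(1001)]      # grid on [0, 1]
inf_dist = min(abs(t - z0) for t in ts)   # inf_{t} |t - z0| (= 0.05 here)
a = 0.9 * delta * inf_dist

# sup-norm perturbation stays below delta / 2, so h lies in the ball around f
assert max(abs(a / (2 * (t - z0))) for t in ts) < delta / 2
# yet the extension term a / (2 (z - z0)) is unbounded as z -> z0
assert abs(a / (2 * (z0 + 1e-9 - z0))) > 1e6
```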
The above lemma immediately proves the following theorem.
\begin{theorem}
The class $ U_5([0,1],0) $ is a dense $ G_\delta $ subset of $ C[0,1] $.
\end{theorem}
\begin{proof}
We denote by $ A(M,t_0,r) $ the set $ A $ of the above lemma. If we consider a dense sequence $ (z_l) $ in $ (0,1) $, then the set $ \bigcap \limits^{\infty}_{l=1}\bigcap \limits^{\infty}_{n=1}\bigcap \limits^{\infty}_{M=1} \left( C[0,1] \backslash A(M,z_l,\tfrac{1}{n}) \right) $ is a dense $ G_\delta $ subset of $ C[0,1] $ due to Baire's Theorem and coincides with $ U_5([0,1],0) $.
\end{proof}
\begin{remark}
From the proof of the theorem we can see that we have the same result for the classes $ U_5([0,1],k) $ for $ k=1,2, \dots, \infty $.
\end{remark}
\begin{theorem}
Let $ L $ be a closed set without isolated points. Then the class of continuous functions $ u $ on $ L $ for which there is no pair of a disk $D(z_0,r)$, $z_0\in L$, $r>0$, and a holomorphic function
$f: D(z_0,r)\rightarrow \mathbb{C}$ such that
$f|_{D(z_0,r)\cap L}=u|_{D(z_0,r)\cap L}$ is a dense $ G_\delta $ subset of $ C(L) $.
\end{theorem}
\begin{proof}
The proof is the same as the proof of Theorem 6.2.
\end{proof}
Now we will prove the result of the first theorem of this paragraph for the case of a semi-disk. For this, we need the following lemma.
\begin{lemma}
Let $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$ be a semi-disk with $ t_0\in (0,1) $ and radius $ 0<r\leq \min(t_0,1-t_0) $, and let $ M>0 $. The set of continuous functions $ f:[0,1]\rightarrow \mathbb{C} $ for which there exists a continuous function $ F $ on $ \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $, where $ F $ is bounded by $ M $, $ F $ is holomorphic on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$ and $ F|_{D(t_0,r)\cap [0,1]}=f|_{D(t_0,r)\cap [0,1]} $, is a closed subset of $ C\left([0,1]\right) $ and has empty interior.
\end{lemma}
\begin{proof}
Let $ B $ be the set of continuous functions $ f:[0,1]\rightarrow \mathbb{C} $ for which there exists a continuous function $ F $ on $ \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $, where $ F $ is bounded by $ M $, $ F $ is holomorphic on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$ and $ F|_{D(t_0,r)\cap [0,1]}=f|_{D(t_0,r)\cap [0,1]} $. If this set does not have empty interior, then there is a function $ f $ in the interior of $ B$, with corresponding extension $ F $, and $ \delta>0 $ such that $$\left\{g \in C\left([0,1]\right): \sup \limits_{t \in [0,1]}\left| f(t)-g(t) \right| < \delta \right\}\subseteq B.$$ We choose $ z_0 \in D(t_0,r)\backslash [0,1] $ with $ Im(z_0)>0 $ and $ a>0 $ with $ a<\delta \inf \limits_{t\in [0,1]}|t-z_0| $. The function $ h(t)=f(t)+ \dfrac{a}{2(t-z_0)} $ for $ t\in [0,1] $ belongs to $ B $ and therefore has a continuous and bounded extension $ G $ on $ \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $ with $ G|_{D(t_0,r)\cap [0,1]}=h|_{D(t_0,r)\cap [0,1]} $, which is holomorphic on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$. We can easily see that $ G(z)=F(z)+ \dfrac{a}{2(z-z_0)} $ on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\} $. Indeed, by the Schwarz Reflection Principle there exists a holomorphic function $ H:D(t_0,r)\backslash \{z_0, \overline{z_0}\}\rightarrow\mathbb{C}$ with $ H(z)=G(z)-F(z)- \dfrac{a}{2(z-z_0)} $ on $ \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $. Therefore, by analytic continuation, because $ H=0 $ on $ D(t_0,r)\cap [0,1] $, we get $ G(z)-F(z)- \dfrac{a}{2(z-z_0)} =0$ on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$. As a result, $ G $ is not bounded on the semi-disk, which is a contradiction. So, $ B $ has empty interior.\\
Now, we will prove that $ B $ is closed. Let $ (f_n)_{n\geq 1} $ be a sequence in $ B$ which converges uniformly on $ [0,1] $ to a function $ f $. Then, for $ n=1,2, \dots $ there exist continuous functions $ F_n $ on $ \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $, where the $ F_n $ are bounded by $ M $ and holomorphic on $ D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}$, with $ F_n|_{D(t_0,r)\cap [0,1]}=f_n|_{D(t_0,r)\cap [0,1]} $. There exists a bijective and holomorphic function $ \Psi :I\cup D \rightarrow \left(D(t_0,r)\cap \{z \in \mathbb{C}:Im(z)>0\}\right) \cup [t_0-r,t_0+r] $, where $ I= \left\{ e^{it}: 0\leq t\leq 1 \right\} $. Then, if $ G_n=F_n\circ\Psi $, $ g_n=f_n\circ\Psi $ and $ g=f\circ\Psi $ for $ n=1,2, \dots $, we have that $ g_n $ converges uniformly on $ I $ to $ g $. Also, $ g_n, g, G_n $ are continuous functions, and the $ G_n $ are holomorphic on $ D $ and bounded by $ M $. By Montel's theorem, there is a subsequence $ (G_{k_n}) $ of $ (G_n) $ which converges uniformly on the compact subsets of $ D $ to a function $ G $ which is holomorphic and bounded by $ M $ on $ D $. We can suppose without loss of generality that $ (G_n)=(G_{k_n}) $, because otherwise we can consider the sequence $ (W_n)=(G_{k_n}) $ and follow the same proof for this sequence.
Now, it is sufficient to prove that $ (G_n) $ converges uniformly on every closed circular sector with boundary $ [0,e^{ia}]\cup [0,e^{ib}] \cup \left\{ e^{it}: a\leq t\leq b \right\}$, where $ 0<a<b<1 $, because then the limit of $ (G_n) $, which is $ g $ on the arc and $ G $ on the rest of the circular sector, will be a continuous function. So, let $ K $ be a closed circular sector with boundary $ [0,e^{ia}]\cup [0,e^{ib}] \cup \left\{ e^{it}: a\leq t\leq b \right\}$, where $ 0<a<b<1 $. We will prove that $ (G_n) $ is uniformly Cauchy on $ K $.
From [6], we know that for every $ n $, the radial limits of $ G_n $ exist almost everywhere and so we can consider the respective functions $ g_n $ on the whole circle which are extensions of the previous $ g_n $. These $ g_n $ are also bounded by $ M $.\\
Let $ \varepsilon>0 $. For the Poisson kernel $ P_r$ and for every $ n=1,2, \dots $ we have that $G_n(re^{i\theta})=\dfrac{1}{2\pi}\int\limits_{-\pi}^{\pi}P_r(t)g_n(\theta-t)dt $. We choose $ 0<\delta < \min\{1-b,a\} $. There exists $ 0<r_0<1 $ such that $ \sup\limits_{\delta\leq |t|\leq \pi} P_r(t) < \frac{\varepsilon}{8M} $ for every $ r\in [r_0,1) $. Since $ (G_n) $ converges uniformly on compact subsets of $ D $, it is uniformly Cauchy on $ K\cap \{z \in \mathbb {C}: |z|\leq r_0\} $, and thus there exists $ n_1 $ such that for every $ n,m\geq n_1 $,
$$ \sup\limits_{z \in K\cap \{z \in \mathbb {C}: |z|\leq r_0\} }|G_n(z)-G_m(z)|<\dfrac{\varepsilon}{2}$$
In addition, because $ g_n $ converges uniformly to $ g $ on $ I $, there exists $ n_2 $ such that for every $ n,m\geq n_2 $, $ \sup\limits_{z \in I} |g_n(z)-g_m(z)|<\dfrac{\varepsilon}{4}$.
So, for $ n,m\geq \max\{n_1,n_2\} $, $ \theta \in [a,b] $ and $ r_0<r<1 $,
$$ |G_n(re^{i\theta})-G_m(re^{i\theta})|\leq \dfrac{1}{2\pi}\int\limits_{-\pi}^{\pi}P_r(t)|g_n(\theta-t)-g_m(\theta-t)|dt=$$
$$\dfrac{1}{2\pi}\int\limits_{-\delta}^{\delta}P_r(t)|g_n(\theta-t)-g_m(\theta-t)|dt+\dfrac{1}{2\pi}\int\limits_{\delta\leq |t|\leq \pi}P_r(t)|g_n(\theta-t)-g_m(\theta-t)|dt\leq $$
$$\dfrac{1}{2\pi}\int\limits_{-\delta}^{\delta}P_r(t)\sup\limits_{z \in I} |g_n(z)-g_m(z)|dt+\dfrac{\sup\limits_{\delta\leq |t|\leq \pi} P_r(t)}{2\pi}\int\limits_{\delta\leq |t|\leq \pi}2Mdt\leq $$
$$\dfrac{\varepsilon}{4}\dfrac{1}{2\pi}\int\limits_{-\delta}^{\delta}P_r(t)dt+\dfrac{\varepsilon}{16M\pi}\int\limits_{\delta\leq |t|\leq \pi}2Mdt\leq $$
$$\dfrac{\varepsilon}{4}\dfrac{1}{2\pi}\int\limits_{-\pi}^{\pi}P_r(t)dt+\dfrac{\varepsilon}{4}=
\dfrac{\varepsilon}{2} $$
The inequality also holds for $ r=1 $. Therefore $ (G_n) $ is uniformly Cauchy on $ K $, and thus $ B $ is a closed subset of $ C([0,1]) $.
\end{proof}
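The two facts about the Poisson kernel $P_r(t)=\frac{1-r^2}{1-2r\cos t+r^2}$ used in the proof above -- that $\sup_{\delta\leq |t|\leq \pi} P_r(t)\rightarrow 0$ as $r\rightarrow 1^-$ and that $\frac{1}{2\pi}\int_{-\pi}^{\pi}P_r(t)dt=1$ -- can be checked numerically. A small Python sketch (the concrete values of $\delta$, $r$ and the step count are chosen for illustration only):

```python
import math

def poisson_kernel(r, t):
    # P_r(t) = (1 - r^2) / (1 - 2 r cos t + r^2)
    return (1 - r * r) / (1 - 2 * r * math.cos(t) + r * r)

# On delta <= |t| <= pi the kernel is maximized at |t| = delta
# (P_r is even and decreasing in |t| on (0, pi)), and this maximum
# tends to 0 as r -> 1^-.
delta = 0.5
for r in (0.9, 0.99, 0.999):
    print(r, poisson_kernel(r, delta))

# Normalization (1/2pi) * int_{-pi}^{pi} P_r(t) dt = 1, checked crudely
# with a midpoint Riemann sum.
r, N = 0.9, 100000
approx = sum(poisson_kernel(r, -math.pi + 2 * math.pi * (k + 0.5) / N)
             for k in range(N)) * (2 * math.pi / N) / (2 * math.pi)
print(round(approx, 4))
```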
\begin{theorem}
The class $ U_2([0,1],0) $ is a dense $ G_\delta $ subset of $ C[0,1] $.
\end{theorem}
\begin{proof}
We denote by $ B(M,t_0,r) $ the set $ B $ of the above lemma. Then, if we consider a dense sequence $ (z_l) $ in $ (0,1) $, the set $ \bigcap \limits^{\infty}_{l=1}\bigcap \limits^{\infty}_{n=1}\bigcap \limits^{\infty}_{M=1} \left( C[0,1] \backslash B(M,z_l,1/n) \right) $ is a dense $ G_\delta $ subset of $ C[0,1] $ due to Baire's theorem and coincides with $ U_2([0,1],0) $.
\end{proof}
\begin{remark}
From the proof of the theorem we can see that we have the same result for the classes $ U_2([0,1],k) $ for $ k=1,2,\dots, \infty $. Furthermore, the same is true for the classes $ U_3([0,1],k) $, $ U_4([0,1],k) $, $ U_5([0,1],k) $ for $ k=0,1,2,\dots, \infty $.
\end{remark}
\noindent \textbf{Acknowledgement}: We would like to thank P. Gauthier,
J.-P. Kahane, G. Koumoullis, M. Maestre, S. Mercourakis, M. Papadimitrakis and
A. Siskakis for helpful communications.\\
\begin{center}
\textbf{REFERENCES}
\end{center}
\noindent
[1]: L. Ahlfors, Complex Analysis: An Introduction to the Theory of Analytic
Functions of One Complex Variable, Third Edition, McGraw-Hill Book Company.
\noindent
[2]: F. S. Cater, Differentiable, nowhere analytic functions, Amer. Math. Monthly
91 (1984), no. 10, 618-624.
\noindent
[3]: G. Costakis, V. Nestoridis and I. Papadoperakis, Universal Laurent series,
Proc. Edinb. Math. Soc. (2) 48 (2005), no. 3, 571–583
\noindent
[4]: A. Daghighi and S. Krantz, A note on a conjecture concerning boundary
uniqueness, arXiv:1407.1763v2 [math.CV], 12 Aug. 2015.
\noindent
[5]: J. Garnett, Analytic Capacity and Measure, Lecture Notes in Mathematics, vol.
277, Springer-Verlag Berlin, Heidelberg, New York, 1972
\noindent
[6]: K. Hoffman, Banach Spaces of Analytic Functions, 1962 Prentice Hall Inc.,
Englewood Cliffs, N. J.
\noindent
[7]: S. Krantz and H. Parks, A primer of Real Analytic Functions, Second edition,
2002 Birkhauser Boston
\noindent
[8]: V. Nestoridis, Non extendable holomorphic functions, Math. Proc. Cambridge
Philos. Soc. 139 (2005), no. 2, 351–360.
\noindent
[9]: V. Nestoridis and A. Papadopoulos, Arc length as a conformal parameter for
locally analytic curves, arXiv:1508.07694.
\noindent
[10]: V. Nestoridis and I. Zadik, Padé approximants, density of rational functions in
$A^\infty(\Omega)$ and smoothness of the integration operator, J. Math. Anal. Appl.
423 (2015), no. 2, 1514-1539 (arXiv:1212.4394).
\noindent
[11]: W. Rudin, Function theory in polydiscs, W. A. Benjamin 1969
\\
\noindent
University of Athens\\
Department of Mathematics\\
157 84 Panepistemiopolis\\
Athens\\
Greece\\
\noindent
E-mail addresses:\\
[email protected] (E. Bolkas)\\
[email protected] (V. Nestoridis)\\
chris$\[email protected] (C. Panagiotis)
\end{document} | {"config": "arxiv", "file": "1511.08584.tex"} |
\begin{document}
\title{The ${[46,9,20]_2}$ code is unique}
\author{Sascha Kurz}
\address{Sascha Kurz, University of Bayreuth, 95440 Bayreuth, Germany}
\email{[email protected]}
\abstract{The minimum distance of all binary linear codes with dimension at most eight is known. The smallest open case for
dimension nine is length $n=46$ with known bounds $19\le d\le 20$. Here we present a $[46,9,20]_2$ code and show its uniqueness.
Interestingly enough, this unique optimal code is asymmetric, i.e., it has a trivial automorphism group. Additionally, we show the
non-existence of $[47,10,20]_2$ and $[85,9,40]_2$ codes.\\
\textbf{Keywords:} Binary linear codes, optimal codes\\
}
\maketitle
\section{Introduction}
An $[n,k,d]_q$-code is a $q$-ary linear code with length $n$, dimension $k$, and minimum Hamming distance $d$. Here we will only consider
binary codes, so that we also speak of $[n,k,d]$-codes. Let $n(k,d)$ be the smallest integer $n$ for which an $[n,k,d]$-code exists. Due to
Griesmer \cite{griesmer1960bound} we have
\begin{equation}
\label{eq_griesmer_bound}
n(k,d)\ge g(k,d):=\sum_{i=0}^{k-1} \left\lceil\frac{d}{2^i}\right\rceil,
\end{equation}
where $\lceil{x}\rceil$ denotes the smallest integer $\ge x$. As shown by Baumert and McEliece \cite{baumert1973note} for every fixed dimension $k$
there exists an integer $D(k)$ such that $n(k,d)=g(k,d)$ for all $d\ge D(k)$, i.e., the determination of $n(k,d)$ is a finite problem for every
fixed dimension $k$. For $k\le 7$, the function $n(k,d)$ has been completely determined by Baumert and McEliece \cite{baumert1973note} and
van Tilborg \cite{van1981smallest}. After a lot of work of different authors, the determination of $n(8,d)$ has been completed by
Bouyukliev, Jaffe, and Vavrek \cite{bouyukhev2000smallest}. For results on $n(9,d)$ we refer e.g.\ to \cite{dodunekov1999some} and the references therein.
The smallest open case for dimension nine is length $n=46$ with known bounds $19\le d\le 20$. Here we present a $[46,9,20]_2$ code and show its uniqueness.
Interestingly enough, this unique optimal code is asymmetric, i.e., it has a trivial automorphism group. Speaking of a $\Delta$-divisible code for
codes whose weights of codewords all are divisible by $\Delta$, we can state that the optimal code is $4$-divisible. $4$-divisible codes are also
called doubly-even and $2$-divisible codes are called even. Additionally, we show the non-existence of $[47,10,20]_2$ and $[85,9,40]_2$ codes.
Our main tools -- described in the next section -- are the standard residual code argument (Proposition~\ref{prop_residual}), the MacWilliams identities
(Proposition~\ref{prop_MacWilliams_identitites}), a result based on the weight distribution of Reed-Muller codes (Proposition~\ref{prop_div_one_more}),
and the software packages \texttt{Q-Extension} \cite{bouyukliev2007q}, \texttt{LinCode} \cite{LinCode} to enumerate linear codes with a list of allowed
weights. For an easy access to the known non-existence results for linear codes we have used the online database \cite{Grassl:codetables}.
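For concreteness, the Griesmer bound $g(k,d)$ from Inequality~(\ref{eq_griesmer_bound}) is a simple integer computation. The following small Python sketch (function name ours) reproduces the two values relevant later in the paper: $g(9,21)=47$, so that $d=21$ is impossible at length $46$, and $g(9,20)=44$, so that the $[46,9,20]_2$ code is not a Griesmer code.

```python
def griesmer(k, d):
    # g(k, d) = sum_{i=0}^{k-1} ceil(d / 2^i), with exact integer arithmetic:
    # -(-d // m) is ceil(d / m) for positive integers
    return sum(-(-d // 2**i) for i in range(k))

print(griesmer(9, 21))  # 47: a [46,9,21]_2 code cannot exist
print(griesmer(9, 20))  # 44: length 46 exceeds the Griesmer bound for d = 20
```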
\section{Basic tools}
\begin{Definition}
Let $C$ be an $[n,k,d]$-code and $c\in C$ be a codeword of weight $w$. The restriction to the support of $c$ is called the residual code
$\operatorname{Res}(C;c)$ of $C$ with respect to $c$. If only the weight $w$ is of importance, we will denote it by $\operatorname{Res}(C;w)$.
\end{Definition}
\begin{Proposition}
\label{prop_residual}
Let $C$ be an $[n,k,d]$-code. If $d>w/2$, then $\operatorname{Res}(C;w)$ has the parameters
$$
\left[n-w,k-1,\ge d-\left\lfloor w/2\right\rfloor\right].
$$
\end{Proposition}
Some authors call the result for the special case $w=d$ the one-step Griesmer bound.
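Proposition~\ref{prop_residual} is purely arithmetic; a minimal Python sketch (the function name is ours) reproduces, e.g., the two applications in the proof of Lemma~\ref{lemma_46_9_20_even}: a weight-$20$ codeword in a hypothetical $[45,9,20]_2$ code would give a $[25,8,10]_2$ residual code, and a weight-$26$ codeword in a $[46,9,20]_2$ code gives a $[20,8,7]_2$ residual code.

```python
def residual_params(n, k, d, w):
    # parameters [n - w, k - 1, >= d - floor(w/2)] of Res(C; w),
    # valid under the hypothesis d > w/2
    assert 2 * d > w
    return n - w, k - 1, d - w // 2

print(residual_params(45, 9, 20, 20))  # (25, 8, 10)
print(residual_params(46, 9, 20, 26))  # (20, 8, 7)
```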
\begin{Proposition} (\cite{macwilliams1977theory}, MacWilliams Identities)
\label{prop_MacWilliams_identitites}
Let $C$ be an $[n,k,d]$-code and $C^\perp$ be the dual code of $C$. Let $A_i(C)$ and $B_i(C)$ be the number of codewords of weight $i$ in $C$ and
$C^\perp$, respectively. With this, we have
\begin{equation}
\label{eq_MacWilliams}
\sum_{j=0}^n K_i(j)A_j(C)=2^kB_i(C),\quad 0\le i\le n
\end{equation}
where
$$
K_i(j)=\sum_{s=0}^n (-1)^s {{n-j}\choose{i-s}}{{j}\choose{s}},\quad 0\le i\le n
$$
are the binary Krawtchouk polynomials. We will simplify the notation to $A_i$ and $B_i$ whenever $C$ is clear from the context.
\end{Proposition}
Whenever we speak of the first $l$ MacWilliams identities, we mean Equation~(\ref{eq_MacWilliams}) for $0\le i\le l-1$.
Adding the non-negativity constraints $A_i,B_i\ge 0$ we obtain a linear program where we can maximize or minimize certain quantities, which is called the
linear programming method for linear codes. Adding additional equations or inequalities strengthens the formulation.
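As a sanity check of Proposition~\ref{prop_MacWilliams_identitites}, the identities can be verified directly on a toy example. The Python sketch below (all names ours) uses the $[3,1,3]_2$ repetition code, whose dual is the even-weight $[3,2,2]_2$ code, and checks $\sum_j K_i(j)A_j = 2^k B_i$ for all $i$.

```python
from itertools import product
from math import comb

def C(n, k):
    # binomial coefficient, zero outside the usual range
    return comb(n, k) if 0 <= k <= n else 0

def krawtchouk(n, i, j):
    # binary Krawtchouk polynomial K_i(j)
    return sum((-1)**s * C(n - j, i - s) * C(j, s) for s in range(n + 1))

def weight_dist(words, n):
    A = [0] * (n + 1)
    for w in words:
        A[sum(w)] += 1
    return A

# toy example: the [3,1,3] repetition code and its dual, the even-weight code
n, k = 3, 1
code = [(0, 0, 0), (1, 1, 1)]
dual = [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]
A, B = weight_dist(code, n), weight_dist(dual, n)
for i in range(n + 1):
    assert sum(krawtchouk(n, i, j) * A[j] for j in range(n + 1)) == 2**k * B[i]
print(A, B)
```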
\begin{Proposition}(\cite[Proposition 5]{dodunekov1999some}, cf.~\cite{simonis1994restrictions})
\label{prop_div_one_more}
Let $C$ be an $[n,k,d]$-code with all weights divisible by $\Delta:=2^a$ and let $\left(A_i\right)_{i=0,1,\dots,n}$ be the weight distribution of $C$. Put
\begin{eqnarray*}
\alpha&:=&\min\{k-a-1,a+1\},\\
\beta&:=&\lfloor(k-a+1)/2\rfloor,\text{ and}\\
\delta&:=&\min\{2\Delta i\,\mid\,A_{2\Delta i }\neq 0\wedge i>0\}.
\end{eqnarray*}
Then the integer
$$
T:=\sum_{i=0}^{\lfloor n/(2\Delta)\rfloor} A_{2\Delta i}
$$
satisfies the following conditions.
\begin{enumerate}
\item \label{div_one_more_case1}
$T$ is divisible by $2^{\lfloor(k-1)/(a+1)\rfloor}$.
\item \label{div_one_more_case2}
If $T<2^{k-a}$, then
$$
T=2^{k-a}-2^{k-a-t}
$$
for some integer $t$ satisfying $1\le t\le \max\{\alpha,\beta\}$. Moreover, if $t>\beta$, then $C$ has an $[n,k-a-2,\delta]$-subcode and if $t\le \beta$, it has an
$[n,k-a-t,\delta]$-subcode.
\item \label{div_one_more_case3}
If $T>2^k-2^{k-a}$, then
$$
T=2^k-2^{k-a}+2^{k-a-t}
$$
for some integer $t$ satisfying $0\le t\le \max\{\alpha,\beta\}$. Moreover, if $a=1$, then $C$ has an $[n,k-t,\delta]$-subcode. If $a>1$, then $C$ has an
$[n,k-1,\delta]$-subcode unless $t=a+1\le k-a-1$, in which case it has an $[n,k-2,\delta]$-subcode.
\end{enumerate}
\end{Proposition}
A special and well-known subcase is that the number of even weight codewords in a $[n,k]$ code is either $2^{k-1}$ or $2^k$.
\section{Results}
\begin{Lemma}
\label{lemma_no_le16_4_7_without_8}
Each $[\le 16,4,7]_2$ code contains a codeword of weight $8$.
\end{Lemma}
\begin{Proof}
Let $C$ be an $[n,4,7]_2$ code with $n\le 16$ and $A_8=0$. From the first two MacWilliams identities we conclude
$$
A_7+A_9+\sum_{i\ge 10} A_i = 2^4-1=15\quad\text{and}\quad
7A_7+9A_9+\sum_{i\ge 10} iA_i = 2^3n =8n,
$$
so that
$$
2A_9+3A_{10}+\sum_{i\ge 11} (i-7)A_i = 8n-105.
$$
Thus, the number of even weight codewords is at most $8n/3-34$. Since at least half the codewords have to be of even weight, we obtain
$n\ge \left\lceil 15.75\right\rceil=16$. In the remaining case $n=16$ we use the linear programming method with the first four MacWilliams identities,
$A_8=0$, $B_1=0$, and the fact that there are exactly $8$ even weight codewords to conclude $A_{11}+\sum_{i\ge 13} A_i <1$, i.e., $A_{11}=0$ and $A_i=0$ for all $i\ge 13$.
With this and rounding to integers we obtain the bounds $5\le B_2\le 6$, which then give the unique solution $A_7=7$, $A_9=0$, $A_{10}=6$, and $A_{12}=1$. Computing
the full dual weight distribution reveals $B_{15}=-2$, which is negative -- a contradiction.
\end{Proof}
\begin{Lemma}
\label{lemma_46_9_20_even}
Each even $[46,9,20]_2$ code $C$ is isomorphic to a code with generator matrix
$$
\begin{pmatrix}
1001010101110011011010001111001100100100000000\\
1111100101010100100011010110011001100010000000\\
1100110100001111101111000100000110101001000000\\
0110101010010110101101110010100011001000100000\\
0011101110101101100100101001010001011000010000\\
0110011001111100011100011000110000111000001000\\
0001111000011100000011111000001111111000000100\\
0000000111111100000000000111111111111000000010\\
0000000000000011111111111111111111111000000001\\
\end{pmatrix}.
$$
\end{Lemma}
\begin{Proof}
Applying Proposition~\ref{prop_residual} with $w=20$ on a $[45,9,20]$ code would give a $[25,8,10]$ code, which does not exist. Thus, $C$ has effective length $n=46$, i.e., $B_1=0$.
Since no $[44,8,20]$ code exists, $C$ is projective, i.e., $B_2=0$. Since no $[24,8,9]$ code exists, Proposition~\ref{prop_residual} yields that $C$ cannot
contain a codeword of weight $w=22$. Assume for a moment that $C$ contains a codeword $c_{26}$ of weight $w=26$ and let $R$ be the corresponding residual $[20,8,7]$
code. Let $c'\neq c_{26}$ be another codeword of $C$ and $w'$ and $w''$ be the weights of $c'$ and $c'+c_{26}$. Then the weight of the corresponding
residual codeword is given by $(w'+w''-26)/2$, so that weight $8$ is impossible in $R$ ($C$ does not contain a codeword of weight $22$). Since
$R$ has to contain a $[\le 16,4,7]_2$ subcode, Lemma~\ref{lemma_no_le16_4_7_without_8} shows the non-existence of $R$, so that $A_{26}=0$.
With this, the first three MacWilliams Identities are given by
\begin{eqnarray*}
A_{20}+A_{24}+A_{28}+A_{30}+\sum_{i=1}^{8} A_{2i+30}&=&511\\
3A_{20}-A_{24}-5A_{28}-7A_{30} -\sum_{i=1}^{8}\left(2i+7\right)\cdot A_{2i+30}&=&-23\\
5A_{20}+21A_{24}-27A_{28}-75A_{30}-\sum_{i=1}^{8}\left(8i^2+56i+75\right)\cdot A_{2i+30}&=&1035.
\end{eqnarray*}
Minimizing $T=A_0+A_{20}+A_{24}+A_{28}+A_{32}+A_{36}+A_{40}+A_{44}$ gives $T\ge \tfrac{6712}{15}>384$, so that Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3})
gives $T=512$, i.e., all weights are divisible by $4$. A further application of the linear programming method gives that $A_{36}+A_{40}+A_{44}\le \left\lfloor\tfrac{9}{4}\right\rfloor=2$,
so that $C$ has to contain a $[\le 44,7,\{20,24,28,32\}]_2$ subcode.
Next, we have used \texttt{Q-Extension} and \texttt{LinCode} to classify the $[n,k,\{20,24,28,32\}]_2$ codes for $k\le 7$ and
$n\le 37+k$, see Table~\ref{table_4_div_d_20}. Starting from the $337799$ doubly-even $[\le 44,7,20]$ codes,
\texttt{Q-Extension} and \texttt{LinCode} give 424207 doubly-even $[45,8,20]_2$ codes and no doubly-even $[44,8,20]_2$ code
(as the maximum minimum distance of a $[44,8]_2$ code is $19$.) Indeed, a codeword of weight $36$ or $40$
can occur in a doubly-even $[45,8,20]_2$ code. We remark that largest occurring order of the automorphism group is $18$.
Finally, an application of \texttt{Q-Extension} and \texttt{LinCode} on the 424207 doubly-even $[45,8,20]_2$ codes results in the
unique code as stated. (Note that there may be also doubly-even $[45,8,20]_2$ codes with two or more codewords of a weight $w\ge 36$.
However, these are not relevant for our conclusion.)
\end{Proof}
\begin{table}[htp]
\begin{center}
\begin{tabular}{rrrrrrrrrrrrrrrrr}
\hline
k / n & 20 & 24 & 28 & 30 & 32 & 34 & 35 & 36 & 37 & 38 & 39 & 40 & 41 & 42 & 43 & 44 \\
\hline
1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & & & & & & \\
2 & & & & 1 & 1 & 2 & 0 & 3 & 0 & 3 & 0 & & & & & \\
3 & & & & & & & 1 & 1 & 2 & 4 & 6 & 9 & & & & \\
4 & & & & & & & & & & 1 & 4 & 13 & 26 & & & \\
5 & & & & & & & & & & & & 3 & 15 & 163 & & \\
6 & & & & & & & & & & & & & & 24 & 3649 & \\
7 & & & & & & & & & & & & & & & 5 & 337794 \\
\hline
\end{tabular}
\caption{Number of $[n,k,\{20,24,28,32\}]_2$ codes.}
\label{table_4_div_d_20}
\end{center}
\end{table}
We remark that the code of Lemma~\ref{lemma_46_9_20_even} has a trivial automorphism group and weight enumerator $1x^0+235x^{20}+171x^{24}+97x^{28}+8x^{32}$, i.e.,
all weights are divisible by four. The dual minimum distance is $3$ ($A_3^\perp=1$, $A_4^\perp=276$), i.e., the code is projective. Since the Griesmer bound, see
Inequality~(\ref{eq_griesmer_bound}), gives a lower bound of $47$
for the length of a binary linear code with dimension $k=9$ and minimum distance $d\ge 21$, the code has the optimum minimum distance. The linear programming method could also be
used to exclude the weights $w=40$ and $w=44$ directly (and to show $A_{36}\le 2$).
While the maximum
distance $d=20$ was proven using the Griesmer bound directly, the $[46,9,20]_2$ code is not a \textit{Griesmer code}, i.e., where Inequality~(\ref{eq_griesmer_bound}) is satisfied with equality.
For the latter codes the $2^2$-divisibility would follow from \cite[Theorem 9]{ward2001divisible} stating that for Griesmer codes over $\mathbb{F}_p$, where $p^e$ is a divisor of the minimum
distance, all weights are divisible by $p^e$.
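Since the code has only $2^9=512$ codewords, the stated weight enumerator can be verified by brute force directly from the generator matrix of Lemma~\ref{lemma_46_9_20_even}. A Python sketch (rows copied from the lemma; each row is read as a $46$-bit integer):

```python
from collections import Counter

rows = [int(r, 2) for r in [
    "1001010101110011011010001111001100100100000000",
    "1111100101010100100011010110011001100010000000",
    "1100110100001111101111000100000110101001000000",
    "0110101010010110101101110010100011001000100000",
    "0011101110101101100100101001010001011000010000",
    "0110011001111100011100011000110000111000001000",
    "0001111000011100000011111000001111111000000100",
    "0000000111111100000000000111111111111000000010",
    "0000000000000011111111111111111111111000000001",
]]

dist = Counter()
for m in range(2**9):            # all 512 messages
    c = 0
    for i in range(9):
        if (m >> i) & 1:
            c ^= rows[i]         # linear combination of the rows over F_2
    dist[bin(c).count("1")] += 1

# should reproduce 1x^0 + 235x^20 + 171x^24 + 97x^28 + 8x^32
print(sorted(dist.items()))
```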
\begin{Theorem}
\label{theorem_46_9_20}
Each $[46,9,20]_2$ code $C$ is isomorphic to a code with the generator matrix given in Lemma~\ref{lemma_46_9_20_even}.
\end{Theorem}
\begin{Proof}
Let $C$ be a $[46,9,20]_2$ code with generator matrix $G$ that is not even. Removing a column from $G$ and adding a parity check bit gives an even $[46,9,20]_2$ code. So,
we start from the generator matrix of Lemma~\ref{lemma_46_9_20_even} and replace a column by all $2^9-1$ possible column vectors. Checking all $46\cdot 511$ cases
gives either linear codes with a codeword of weight $19$ or the generator matrix of Lemma~\ref{lemma_46_9_20_even} again.
\end{Proof}
\begin{Lemma}
\label{lemma_no_47_10_20}
No $[47,10,20]_2$ code exists.
\end{Lemma}
\begin{Proof}
Assume that $C$ is a $[47,10,20]_2$ code. Since no $[46,10,20]_2$ and no $[45,9,20]_2$ code exists, we have $B_1=0$ and $B_2=0$, respectively.
Let $G$ be a systematic generator matrix of $C$. Since removing the $i$th unit vector and the corresponding column (with the $1$-entry) from $G$ gives a $[46,9,20]_2$ code,
there are at least $1023$ codewords in $C$ whose weight is divisible by $4$. Thus, Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}) yields that $C$ is doubly-even.
By Theorem~\ref{theorem_46_9_20} we have $A_{32}\ge 8$. Adding this extra inequality to the linear inequality system of the first four MacWilliams identities gives, after rounding
down to integers, $A_{44}=0$, $A_{40}=0$, $A_{36}=0$, and $B_{3}=0$. (We could also conclude $B_3=0$ directly from the non-existence of a $[44,8,20]_2$-code.) The unique
remaining weight enumerator is given by $1x^0+418x^{20}+318x^{24}+278x^{28}+9x^{32}$. Let $C$ be such a code and $C'$ be the code generated by the nine codewords of weight
$32$. We eventually add codewords from $C$ to $C'$ till $C'$ has dimension exactly nine and denote the corresponding code by $C''$. Now the existence of $C''$ contradicts
Theorem~\ref{theorem_46_9_20}.
\end{Proof}
So, the unique $[46,9,20]_2$ code is strongly optimal in the sense of \cite[Definition 1]{simonis200023}, i.e., no $[n-1,k,d]_2$ and no $[n+1,k+1,d]_2$ code exists. The strongly
optimal binary linear codes with dimension at most seven have been completely classified, except the $[56,7,26]_2$ codes, in \cite{bouyukliev2001optimal}.
The next open case is the existence question for a $[65,9,29]_2$ code, which is equivalent to the existence of a $[66,9,30]_2$ code. The technique
of Lemma~\ref{lemma_46_9_20_even} to conclude the $4$-divisibility of an optimal even code can also be applied in further cases; we give an example for
$[78,9,36]_2$ codes, whose existence is unknown.
\begin{Lemma}
\label{lemma_no_le33_5_15_without_16}
Each $[\le 33,5,15]_2$ code contains a codeword of weight $16$.
\end{Lemma}
\begin{Proof}
We verify this statement computationally using \texttt{Q-Extension} and \texttt{LinCode}.
\end{Proof}
We remark that a direct proof is possible too. However, the one that we found is too involved to be presented here. Moreover, there are exactly $3$ $[\le 32,4,15]_2$ codes without
a codeword of weight $16$.
\begin{Lemma}
If an even $[78,9,36]_2$ code $C$ exists, then it has to be doubly-even.
\end{Lemma}
\begin{Proof}
Since no $[77,9,36]_2$ and no $[76,8,36]_2$ code exists, we have $B_1=0$ and $B_2=0$. Proposition~\ref{prop_residual} yields that $C$ does not contain a codeword of weight $38$.
Assume for a moment that $C$ contains a codeword $c_{42}$ of weight $w=42$ and let $R$ be the corresponding residual $[36,8,15]_2$ code. Let $c'\neq c_{42}$ be another codeword
of $C$ and $w'$ and $w''$ be the weights of $c'$ and $c'+c_{42}$. Then the weight of the corresponding residual codeword is given by $(w'+w''-42)/2$, so that weight $16$ is
impossible in $R$ ($C$ does not contain a codeword of weight $38$). Since $R$ has to contain a $[\le 33,5,15]_2$ subcode, Lemma~\ref{lemma_no_le33_5_15_without_16} shows the
non-existence of $R$, so that $A_{42}=0$.
We use the linear programming method with the first four MacWilliams identities. Minimizing the number $T$ of doubly-even codewords gives
$T\ge \tfrac{1976}{5}>384$, so that Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}) gives $T=512$, i.e., all weights are divisible by $4$.
\end{Proof}
Two cases where $8$-divisibility can be concluded for optimal even codes are given below.
\begin{Theorem}
\label{theorem_no_85_9_40}
No $[85,9,40]_2$ code exists.
\end{Theorem}
\begin{Proof}
Assume that $C$ is a $[85,9,40]_2$ code. Since no $[84,9,40]_2$ and no $[83,8,40]_2$ code exists, we have $B_1=0$ and $B_2=0$, respectively. Considering the residual
code, Proposition~\ref{prop_residual} yields that $C$ contains no codewords with weight $w\in\{42,44,46\}$. With this, we use the first four MacWilliams identities and
minimize $T=A_0+\sum_{i=10}^{21} A_{4i}$. Since $T\ge 416>384$, Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}) gives $T=512$, i.e., all weights
are divisible by $4$. Minimizing $T=A_0+\sum_{i=5}^{10} A_{8i}$ gives $T\ge 472>448$, so that Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}) gives $T=512$,
i.e., all weights are divisible by $8$. The residual code of each codeword of weight $w$ is a projective $4$-divisible code of length $85-w$. Since no such codes of lengths
$5$ and $13$ exist, $C$ does not contain codewords of weight $80$ or $72$, respectively.\footnote{We remark that a $4$-divisible non-projective binary linear code of length
$13$ exists.}
The residual code $\hat{C}$ of a codeword of weight $64$ is a projective $4$-divisible $8$-dimensional code of length $21$. Note that $\hat{C}$ cannot contain
a codeword of weight $20$ since no even code of length $1$ exists. Thus we have $A_{64}\le 1$.
Now we look at the two-dimensional subcodes of the unique codeword of weight $64$ and two other codewords. Denoting their weights
by $a$, $b$, $c$ and the weight of the corresponding codeword in $\hat{C}$ by $w$ we use the notation $(a,b,c;w)$. W.l.o.g.\ we assume $a=64$, $b\le c$
and obtain the following possibilities: $(64,40,40;8)$, $(64,40,48;12)$, $(64,40,56;16)$, and $(64,48,48;16)$. Note that $(64,48,56;20)$ and $(64,56,56;24)$ are impossible.
By $x_8$, $x_{12}$, $x_{16}'$, and $x_{16}''$ we denote the corresponding counts. Setting $x_{16}=x_{16}'+x_{16}''$, we have that $x_i$ is the number of codewords of weight
$i$ in $\hat{C}$. Assuming $A_{64}=1$ the unique (theoretically) possible weight enumerator is $1x^0+360x^{40}+138x^{48}+12x^{56}+1x^{64}$. Double-counting gives
$A_{40}=360=2x_8+x_{12}+x_{16}'$, $A_{48}=138=x_{12}+2x_{16}''$, and $A_{56}=12=x_{16}'$. Solving this equation system gives
$x_{12}=348-2x_8$ and $x_{16}=x_8-93$. Using the first four MacWilliams identities for $\hat{C}$ we obtain the unique solution $x_8={102}$, $x_{12}=144$, and $x_{16}=9$,
so that $x_{16}''=9-12=-3$ is negative -- contradiction. Thus, $A_{64}=0$ and the unique (theoretically) possible weight enumerator is given by
$1x^0+361x^{40}+135x^{48}+15x^{56}$ ($B_3=60$).
Using \texttt{Q-Extension} and \texttt{LinCode} we classify all $[n,k,\{40,48,56\}]_2$ codes for $k\le 7$ and $n\le 76+k$, see Table~\ref{table_8_div_40_48_56}.
For dimension $k=8$, there is no $[83,8,\{40,48,56\}]_2$ code and exactly 106322 $[84,8,\{40,48,56\}]_2$ codes. The latter codes have weight enumerators
$$1x^0+(186+l)x^{40}+(69-2l)x^{48}+lx^{56}$$ ($B_2=l-3$), where $3\le l\le 9$. The corresponding counts are given in Table~\ref{table_per_56}. Since the next step would need
a huge amount of computation time we derive some extra information on a $[84,8,\{40,48,56\}]_2$-subcode of $C$. Each of the $15$ codewords of weight $56$ of $C$
hits $56$ of the columns of a generator matrix of $C$, so that there exists a column which is hit by at most $\left\lfloor 56\cdot 15/85\right\rfloor=9$ such codewords.
Thus, by shortening of $C$ we obtain a $[84,8,\{40,48,56\}]_2$-subcode with at least $15-9=6$ codewords of weight $56$. Extending the corresponding $5666$ cases with
\texttt{Q-Extension} and \texttt{LinCode} results in no $[85,9,\{40,48,56\}]_2$ code. (Each extension took between a few minutes and a few hours.)
\end{Proof}
\begin{table}[htp]
\begin{center}
\begin{tabular}{rrrrrrrrrrrrrrrrrrr}
\hline
k / n & 40 & 48 & 56 & 60 & 64 & 68 & 70 & 72 & 74 & 75 & 76 & 77 & 78 & 79 & 80 & 81 & 82 & 83 \\
\hline
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & & & & & & \\
2 & & & & 1 & 1 & 2 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & & & & & \\
3 & & & & & & & 1 & 1 & 2 & 0 & 3 & 0 & 5 & 0 & & & & \\
4 & & & & & & & & & & 1 & 1 & 2 & 3 & 6 & 10 & & & \\
5 & & & & & & & & & & & & & 1 & 3 & 11 & 16 & & \\
6 & & & & & & & & & & & & & & & 2 & 8 & 106 & \\
7 & & & & & & & & & & & & & & & & & 7 & 5613 \\
\hline
\end{tabular}
\caption{Number of $[n,k,\{40,48,56\}]_2$ codes.}
\label{table_8_div_40_48_56}
\end{center}
\end{table}
\begin{table}[htp]
\begin{center}
\begin{tabular}{rrrrrrrr}
\hline
$A_{56}$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
& 25773 & 48792 & 26091 & 5198 & 450 & 17 & 1\\
\hline
\end{tabular}
\caption{Number of $[84,8,\{40,48,56\}]_2$ codes per $A_{56}$.}
\label{table_per_56}
\end{center}
\end{table}
\begin{Lemma}
\label{lemma_no_le51_4_23_without_24_25_26}
Each $[\le 47,4,23]_2$ code satisfies $A_{24}+A_{25}+A_{26}\ge 1$.
\end{Lemma}
\begin{Proof}
We verify this statement computationally using \texttt{Q-Extension} and \texttt{LinCode}.
\end{Proof}
We remark that there are exactly $1$ $[44,3,23]_2$, $3$ $[45,3,23]_2$, and $9$ $[46,3,23]_2$ codes without codewords of a weight in $\{24,25,26\}$.
\begin{Lemma}
\label{lemma_no_even_le46_5_22_without_24}
Each even $[\le 46,5,22]_2$ code contains a codeword of weight $24$.
\end{Lemma}
\begin{Proof}
We verify this statement computationally using \texttt{Q-Extension} and \texttt{LinCode}.
\end{Proof}
We remark that there are $2$ $[44,4,22]_2$ and $6$ $[45,4,22]_2$ codes that are even and do not contain a codeword of weight $24$.
\begin{Theorem}
If an even $[117,9,56]_2$ code $C$ exists, then the weights of all codewords are divisible by $8$.
\end{Theorem}
\begin{Proof}
From the known non-existence results we conclude $B_1=0$ and that $C$ does not contain codewords with a weight in $\{58,60,62\}$. If $C$ contained a
codeword of weight $66$, then its corresponding residual code $R$ would be a $[51,8,23]_2$ code without codewords with a weight in $\{24,25,26\}$. Since $R$ has to contain a
$[\le 47,4,23]_2$ subcode, this contradicts Lemma~\ref{lemma_no_le51_4_23_without_24_25_26}. Thus, $A_{66}=0$. Minimizing the number $T_4$ of doubly-even codewords
using the first four MacWilliams identities gives $T_4\ge\frac{2916}{7}>384$, so that Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3})
gives $T_4=512$, i.e., all weights are divisible by $4$.
If $C$ contains no codeword of weight $68$, then the number $T_8$ of codewords whose weight is divisible by $8$ is at least $475.86>448$,
so that Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}) gives $T_8=512$, i.e., all weights are divisible by $8$. So, let us assume
that $C$ contains a codeword of weight $68$ and consider the corresponding residual $[49,8,22]_2$ code $R$. Note that $R$ is even and does not contain
a codeword of weight $24$. Since $R$ has to contain an even $[\le 46,5,22]_2$ subcode, this contradicts Lemma~\ref{lemma_no_even_le46_5_22_without_24}. Thus, all weights are divisible by $8$.
\end{Proof}
\begin{Proposition}
If an even $[118,10,56]_2$ code exists, then its weight enumerator is either $1x^0+719x^{56}+218x^{64}+85x^{72}+1x^{80}$ or $1x^0+720x^{56}+215x^{64}+88x^{72}$.
\end{Proposition}
\begin{Proof}
Assume that $C$ is an even $[118,10,56]_2$ code. Since no $[117,10,56]_2$ and no $[116,9,56]_2$ code exists, we have $B_1=0$ and $B_2=0$, respectively. Using
the known upper bounds on the minimum distance for $9$-dimensional codes we can conclude that no codeword has a weight $w\in\{58,60,62,66,68,70\}$. Minimizing
$T=\sum_i A_{4i}$ gives $T\ge 1011.2>768$, so that $C$ is $4$-divisible, see Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}). Minimizing $T=\sum_i A_{8i}$
gives $T\ge 1019.2>896$, so that $C$ is $8$-divisible, again by Proposition~\ref{prop_div_one_more}.(\ref{div_one_more_case3}).
Maximizing $A_{i}$ for $i\in \{88,96,104,112\}$ gives a value strictly less than $1$, so that the only non-zero weights can be $56$, $64$, $72$, and $80$.
Maximizing $A_{80}$ gives an upper bound of $\frac{3}{2}$, so that $A_{80}=1$ or $A_{80}=0$. The remaining values are then uniquely determined by the first four
MacWilliams identities.
\end{Proof}
The exhaustive enumeration of all $[117,9,\{56,64,72\}]_2$ codes remains a computational challenge. While we have constructed a few thousand non-isomorphic
$[115,7,\{56,64,72\}]_2$ codes, we still do not know whether a $[117,9,56]_2$ code exists. | {"config": "arxiv", "file": "1906.02621.tex"} |
TITLE: Prove that **PTIME** has no complete problems with respect to linear-time reductions.
QUESTION [1 upvotes]: Prove that PTIME has no complete problems with respect to
linear-time reductions.
I know that PTIME means the class of all problems that we can solve in polynomial time. I suppose that I should use the time hierarchy theorem here - I know it.
However, I have no idea how to solve it. I am a beginner at this subject, so I ask for an explanation for a dummy :)
REPLY [2 votes]: Suppose that you have a PTIME-complete problem $A$, with some algorithm that runs in $O(n^k)$ time.
This means, by definition, that every problem in PTIME can be decided by first running a linear-time reduction to $A$, and then running your supposed $O(n^k)$ solver for $A$ on the result.
Can you see how this contradicts the time hierarchy theorem?
(Hint. How large an $A$-instance does the linear-time reduction even have time to print?) | {"set_name": "stack_exchange", "score": 1, "question_id": 2196898} |
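For the record, one way the argument can be spelled out (a sketch; not necessarily the intended route):

```latex
\textbf{Sketch.} Suppose $A$ is \textsf{PTIME}-complete under linear-time
reductions and decidable in time $O(n^k)$. By the time hierarchy theorem pick
a language
$$L \in \mathrm{DTIME}(n^{k+2}) \setminus \mathrm{DTIME}(n^{k+1})
      \subseteq \textsf{PTIME}.$$
By completeness there is a linear-time reduction $f$ from $L$ to $A$: on an
input $x$ with $|x| = n$, computing $f(x)$ takes $O(n)$ steps, so in
particular $|f(x)| = O(n)$, and deciding $f(x) \in A$ then takes
$O(|f(x)|^k) = O(n^k)$ steps. Hence
$L \in \mathrm{DTIME}(O(n^k)) \subseteq \mathrm{DTIME}(n^{k+1})$,
contradicting the choice of $L$.
```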
TITLE: free product with amalgamation is correspondingly a pushout
QUESTION [5 upvotes]: I'm trying to prove that the following diagram in the category of groups, with $i_1$ and $i_2$ being inclusions, is a pushout iff $G$ is the free product with amalgamation (up to isomorphism). It should be sufficient to prove $"\Leftarrow"$ since the pushout is unique up to isomorphism. How does the proof of $"\Leftarrow"$ go?
\begin{array}{ccc}
G_0 & \xrightarrow{i_1} & G_1 \\
\downarrow_{i_2} & & \downarrow_{\varphi_1}\\
G_2 & \xrightarrow{\varphi_2} & G
\end{array}
I tried defining the unique homomorphism of the pushout by
$$\phi((g_1*...*g_n)N) = h_i(g_1)*...*h_i(g_n)$$
but couldn't prove its well-definedness.
REPLY [6 votes]: Just think of $i_1$ and $i_2$ as inclusions, even if they are not necessarily. Given two maps $h_i \colon G_i \to H, i \in \{1,2\}$, there is an induced one $h \colon G_1 \ast G_2 \to H$ defined as $h_i$ when restricted to the letters of $G_i$. (This is clearly well-defined.)
Now, what happens when $h_1$ and $h_2$ agree on $G_0$? It means that $h_1(x)=h_2(x)$ for any $x \in G_0$. In other words, $\ker (h)$ contains all the $x^{(1)}\ast{x^{(2)}}^{-1}$ for $x \in G_0$ (where the parenthesized exponent indicates where the letter $x$ comes from) and so also the normal subgroup generated by those. This means exactly that you can factor $h$ through the quotient
$$ G_1 \ast_{G_0} G_2 = (G_1 \ast G_2) \mathop{\big/} \left\langle g \left(x^{(1)}\ast {x^{(2)}}^{-1}\right)g^{-1} : x\in G_0, g \in G_1 \ast G_2 \right\rangle .$$
QED.
Now just do the same but taking into account $i_1,i_2$. Hint: it does not change much! | {"set_name": "stack_exchange", "score": 5, "question_id": 1261767} 
TITLE: Number of totally ramified extensions of $\mathbb{Q}_p$ of degree $n$
QUESTION [2 upvotes]: I just read the proof of this theorem that $\mathbb{Q}_p$ has finitely many totally ramified extensions of any degree $n$. The proof uses Krasner's lemma and the compactness of a space which corresponds to Eisenstein polynomials of degree $n$. One then picks a finite subcover which represents every possible such extension.
This proof technique is not very useful if one actually wants to count the number of totally ramified extensions of a particular degree $n$. Does anyone know of any actual methods for computing this?
REPLY [5 votes]: Krasner apparently also derived a formula for the number of totally ramified extensions with a certain discriminant. This works over a finite extension of $\mathbb Q_p$, but we'll stick with the base case for simplicity. The following is from sections 3 and 6 of this paper.
Let $j=an+b$, with $0\le b<n$, and assume $j$ satisfies Ore's condition:
$${\rm min}({\rm ord}_p(b)n,{\rm ord}_p(n)n)\le j\le {\rm ord}_p(n)n$$
Then the number of totally ramified extensions of $\mathbb Q_p$ of degree $n$ and discriminant $p^{n+j-1}$ is
$$\cases{np^{\sum_{i=1}^an/p^i} & b=0\cr n(p-1)p^{\sum_{i=1}^an/p^i+\lfloor (b-1)/p^{a+1}\rfloor}&b>0}$$
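The formula above is easy to tabulate in code. The sketch below (the helper names `count_j` and `count_total` are mine, and it takes ${\rm ord}_p(0)=\infty$ in Ore's condition) sums the counts over all admissible $j$; note that Krasner counts extensions inside a fixed algebraic closure $\overline{\mathbb Q}_p$, not isomorphism classes:

```python
def ordp(p, m):
    """p-adic valuation of an integer, with the convention ord_p(0) = infinity."""
    if m == 0:
        return float("inf")
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def count_j(p, n, j):
    """Krasner's count of totally ramified degree-n extensions of Q_p with
    discriminant p^(n+j-1), following the formula quoted above."""
    a, b = divmod(j, n)
    # Ore's condition; if it fails there are no such extensions.
    if not (min(ordp(p, b) * n, ordp(p, n) * n) <= j <= ordp(p, n) * n):
        return 0
    # sum_{i=1}^a n/p^i is integral here, since Ore's condition forces a <= ord_p(n)
    s = sum(n // p ** i for i in range(1, a + 1))
    if b == 0:
        return n * p ** s
    return n * (p - 1) * p ** (s + (b - 1) // p ** (a + 1))

def count_total(p, n):
    """Total number of totally ramified degree-n extensions of Q_p in a
    fixed algebraic closure."""
    jmax = ordp(p, n) * n
    return sum(count_j(p, n, j) for j in range(jmax + 1))
```

For instance, `count_total(2, 2)` returns $6$, matching the six ramified quadratic extensions of $\mathbb Q_2$, and for $p\nmid n$ it returns $n$, as expected in the tame case.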
Note that if $j$ does not satisfy Ore's condition, there is no totally ramified extension of degree $n$ and discriminant $p^{n+j-1}$. | {"set_name": "stack_exchange", "score": 2, "question_id": 78211}
TITLE: Discontinuous linear operator and its kernel
QUESTION [1 upvotes]: Let $T:E\to \mathbb{R}$ be a nonzero linear operator, with $E$ a vector space. Show the following equivalence: $T$ is discontinuous if and only if $Ker(T)$ is dense in $E$.
REPLY [2 votes]: I was trying to come up with something elegant, but all I could do was get my hands dirty, so...
First, clearly the only continuous linear functional with a dense kernel is the zero functional. For the other direction, assume there exists some $v\in E$ and a balanced zero-neighborhood $U\subset E$ such that $(v+U)\cap\ker T=\emptyset$.
Note, then, that for all $u\in U$, one has $|Tu|<|Tv|$: if there exists some $u\in U$ with $|Tu|\geq |Tv|$ then $\left|-\frac{Tv}{Tu}\right|\leq 1$ hence $-\frac{Tv}{Tu}u\in U$, but then $v-\frac{Tv}{Tu}u\in\ker T$ in contradiction.
This implies that $T$ is continuous at zero (and due to linearity everywhere): for all $\epsilon>0$ there exists a zero neighborhood $V=\frac{\epsilon}{Tv}U$ such that $u\in V\implies |Tu-T0|<\epsilon$.
N.B.
You didn't indicate if $E$ has the structure of a normed vector space, so I didn't assume that. If you're unfamiliar with topological vector spaces, try to translate the argument to metric-space terminology (which amounts to choosing $U$ appropriately). | {"set_name": "stack_exchange", "score": 1, "question_id": 513753} |
TITLE: Mass equals Moment of inertia when constant density?
QUESTION [0 upvotes]: I have found equation for moment of inertia $(J)$. I'm calculating $J$ for hemisphere, with rotational axis $Z$.
$$ J = \iiint\limits_V r^2 \cdot \rho \cdot dV $$
But if $\rho$ is constant (homogenous), I can do:
$$ J = \rho \cdot \iiint\limits_V r^2 \cdot dV $$
Which is:
$$ J = \rho \cdot V $$
$$ J = m $$
Am I right?
REPLY [1 votes]: As others have said $$\iiint _V r^2 dV \neq V$$
The volume element $dV$ in cylindrical coordinates (which is a convenient coordinate system for moment of inertia calculations) is:
$$ dV = r \, dr \, d\theta \, dz$$
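As a quick numerical sanity check (my own sketch, not part of the original answer), one can evaluate the triple integral with this volume element for a solid hemisphere of radius $R$ about its symmetry axis; it gives $J=\frac{2}{5}mR^2$, not $m$:

```python
import math

def hemisphere_inertia(R=1.0, rho=1.0, n=400):
    """Midpoint-rule evaluation of J = ∫∫∫ r^2 rho dV over a solid
    hemisphere of radius R (flat face on z = 0), using dV = r dr dθ dz.
    The θ integral just contributes a factor of 2π."""
    J = 0.0
    dz = R / n
    for i in range(n):
        z = (i + 0.5) * dz
        rmax = math.sqrt(R * R - z * z)  # r runs from 0 to sqrt(R^2 - z^2)
        dr = rmax / n
        for k in range(n):
            r = (k + 0.5) * dr
            J += rho * r ** 2 * (2 * math.pi * r) * dr * dz
    return J

R, rho = 1.0, 1.0
J = hemisphere_inertia(R, rho)
m = rho * (2 / 3) * math.pi * R ** 3  # mass of the hemisphere
# J / (m R^2) comes out close to 2/5, so J is not equal to m.
```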
There are three integration variables ($r,\theta,z$), so you will need to integrate over each with appropriate boundaries that describe a hemisphere (be careful, as the $r$ above is not the radius of the hemisphere, but it is the $r$ in your equation!) | {"set_name": "stack_exchange", "score": 0, "question_id": 61170}
TITLE: For $a,b,c$. Prove that $\frac{a^2}{a+\sqrt[3]{bc}}+\frac{b^2}{b+\sqrt[3]{ca}}+\frac{c^2}{c+\sqrt[3]{ab}}\ge\frac{3}{2}$
QUESTION [0 upvotes]: Let $a,b,c$ be positive real numbers satisfying $a+b+c=3$. Prove that $$\dfrac{a^2}{a+\sqrt[3]{bc}}+\dfrac{b^2}{b+\sqrt[3]{ca}}+\dfrac{c^2}{c+\sqrt[3]{ab}}\ge\dfrac{3}{2}$$
By Cauchy-Schwarz: $\Rightarrow\dfrac{a^2}{a+\sqrt[3]{bc}}+\dfrac{b^2}{b+\sqrt[3]{ca}}+\dfrac{c^2}{c+\sqrt[3]{ab}}\ge\dfrac{\left(a+b+c\right)^2}{a+b+c+\sqrt[3]{bc}+\sqrt[3]{ca}+\sqrt[3]{ab}}$
Hence we need to prove $\dfrac{9}{3+\sqrt[3]{bc}+\sqrt[3]{ca}+\sqrt[3]{ab}}\ge\dfrac{3}{2} (*)$
$\Leftrightarrow9\ge3\sqrt[3]{ab}+3\sqrt[3]{bc}+3\sqrt[3]{ca}$
By AM-GM, $a+b+1\geq 3\sqrt[3]{ab}$ and similarly for the other two terms
$\Rightarrow2\left(a+b+c\right)+3\ge3\sqrt[3]{ab}+3\sqrt[3]{bc}+3\sqrt[3]{ca}$
$\Rightarrow9\ge3\sqrt[3]{ab}+3\sqrt[3]{bc}+3\sqrt[3]{ca}$
I need a new method.
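For what it's worth, a quick numerical check (my own sketch, not a proof; the sampling scheme is arbitrary) suggests the bound is sharp, with the minimum $\frac32$ attained at $a=b=c=1$:

```python
import random

def f(a, b, c):
    """Left-hand side of the inequality."""
    return (a * a / (a + (b * c) ** (1 / 3))
            + b * b / (b + (c * a) ** (1 / 3))
            + c * c / (c + (a * b) ** (1 / 3)))

rng = random.Random(1)
vals = []
for _ in range(10000):
    # random positive triple, rescaled so that a + b + c = 3
    a, b, c = (rng.uniform(1e-3, 1.0) for _ in range(3))
    t = 3 / (a + b + c)
    vals.append(f(a * t, b * t, c * t))
# min(vals) stays above 3/2, while f(1, 1, 1) equals 3/2 exactly.
```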
REPLY [2 votes]: By AM-GM and C-S we obtain:
$$\sum_{cyc}\frac{a^2}{a+\sqrt[3]{bc}}\geq\sum_{cyc}\frac{a^2}{a+\frac{b+c+1}{3}}=\sum_{cyc}\frac{9a^2}{3(3a+b+c)+a+b+c}=$$
$$=\sum_{cyc}\frac{9a^2}{10a+4b+4c}\geq\frac{9(a+b+c)^2}{\sum\limits_{cyc}(10a+4b+4c)}=\frac{9(a+b+c)^2}{18(a+b+c)}=\frac{3}{2}$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 2177039} |
TITLE: Why do brushless motors heat up?
QUESTION [0 upvotes]: I would like to know what makes a brushless motor heat up. I'm aware of the problems that generate high temperature, such as wire contact problems or overloaded motors. I want to know what makes a normal motor under normal conditions get warm - is it related to the Joule effect?
REPLY [2 votes]: In an electric motor the current flows through wires with a finite resistance. Thus you get Joule heating. Furthermore, depending on the design of the motor, you can have electric eddy currents induced by the changing magnetic field in any conducting material. This also produces heat. | {"set_name": "stack_exchange", "score": 0, "question_id": 384563} |
TITLE: Norm estimate for a product of two orthogonal projectors
QUESTION [2 upvotes]: Let $H$ denote a Hilbert space.
Consider two orthogonal projectors $\,P,Q\in\mathscr L(H)\,$ such that $H=\operatorname{Im}P\oplus\operatorname{Im}Q\,,$ that is
both $\,\operatorname{Im}Q\,$ and $\,\operatorname{Im}P\,$ are closed subspaces of $H$,
$\operatorname{Im}P+\operatorname{Im}Q=H$,
$\{0\}=\operatorname{Im}P\cap\operatorname{Im}Q\,$.
It is not assumed that $\operatorname{Im}P\perp\operatorname{Im}Q$, or equivalently $P+Q=\mathbb 1$.
Is it true then that $\|PQ\|<1$ ?
Note that $\|PQ\|=\|QP\|$ as the involution is isometric.
This is a follow-up question to
A "Crookedness criterion" for a pair of orthogonal projectors? .
Its answer shows that it is necessary to assume that
$\,\operatorname{Im}P+\operatorname{Im}Q\,$ is closed in $H$.
REPLY [0 votes]: The norm estimate does indeed hold.
I was told that S. Afriat proved it in the mid-1950s; the reference is in the footnote. Actually, I haven't seen the paper, but received a cue, and here's the reasoning.
Given the direct sum decomposition $H=\operatorname{Im}P\oplus\operatorname{Im}Q\,,$
let $Z\in\mathscr L(H)\,$ denote the projector onto the second summand. In general, this projection onto $\operatorname{Im}Q\,$ along $\operatorname{Im}P\,$ is oblique. We also assume that the direct sum decomposition is not trivial, i.e., neither $\operatorname{Im}Q\,$ nor $\operatorname{Im}P\,$ equals $\{0\}$ (these would correspond to the cases $Z=0$ or $Z=1$, respectively). Then one has
$$\|PQ\|\:\leqslant\:\sqrt{1-\|Z\|^{-2}}\;.$$
Proof:$\ $ If $x\in\operatorname{Im}Q\,$ is a unit vector, then $\,Zx=x\,$ and $\,ZPx=0$. Thus,
$$1=\|Zx-ZPx\|\:\leqslant\:\|Z\|\: \|(1-P)x\|\\[2ex]
\Longrightarrow\quad\frac1{\|Z\|^2} \:\leqslant\: \big\langle(1-P)x\mid (1-P)x\big\rangle
= \big\langle(1-P)x\mid x\big\rangle
= 1-\|Px\|^2\qquad\text{or}\quad\|Px\|\;\leqslant\; \sqrt{1-\|Z\|^{-2}}$$
The claim now follows from
$$\|PQ\|\:=\:\sup_{u\in H,\,\|u\|\leq 1}\|PQu\|
\:=\:\sup_{x\in\operatorname{Im}Q,\,\|x\|\leqslant 1}\|Px\|\:.$$
Remark:$\ $ Every non-zero projector satisfies $\|Z\|\geqslant1$.
And $\|Z\|=1$ holds if and only if $Z$ is an orthogonal projector.
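A quick numerical illustration (my own sketch, not part of the answer): for two lines in $\mathbb R^2$ making an angle $\theta$, one finds $\|PQ\|=|\cos\theta|$ and $\|Z\|=1/|\sin\theta|$, so the estimate above holds, in fact with equality in this two-dimensional case:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def op_norm(A):
    """Spectral norm of a 2x2 real matrix: sqrt of the largest
    eigenvalue of A^T A, computed via the quadratic formula."""
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    M = matmul(At, A)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2)

theta = 0.3
c, s = math.cos(theta), math.sin(theta)
P = [[1.0, 0.0], [0.0, 0.0]]          # orthogonal projector onto span(e1)
Q = [[c * c, c * s], [c * s, s * s]]  # orthogonal projector onto span((c, s))
Z = [[0.0, c / s], [0.0, 1.0]]        # oblique projector onto Im Q along Im P

norm_PQ = op_norm(matmul(P, Q))       # equals |cos(theta)|
norm_Z = op_norm(Z)                   # equals 1/|sin(theta)|
```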
Sydney N. Afriat "Orthogonal and oblique projectors and the characteristics of pairs of vector spaces", Proc. Cambridge Philos. Soc. 53, 1957 | {"set_name": "stack_exchange", "score": 2, "question_id": 3092323} |
TITLE: What is the name of this digraph created from other digraphs?
QUESTION [5 upvotes]: Let $D_1, D_2, ..., D_n$ be digraphs of various sizes and let $C$ be a digraph with $n$ vertices $\{1,...,n\}$. Construct a new digraph, $D$, whose vertex set is $V(D) = V(D_1) \cup \cdots \cup V(D_n)$ and whose edge set is defined by $u \to v$ if and only if one of the two following conditions holds:
$\quad$(a) $u,v \in V(D_i) \text{ and } u \to v \text{ in } D_i$
$\quad$(b) $\exists i \neq j$ such that $u \in V(D_i), v\in V(D_j)\text{ and } i \to j \text{ in } C$.
In other words, $D$ is the digraph obtained by replacing the vertices of $C$ with the digraphs $D_1,D_2,...,D_n$.
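For concreteness, the construction can be sketched in code (the adjacency-set representation, the $0$-based indexing of $C$'s vertices, and the function name `compose` are my own choices):

```python
def compose(C_edges, Ds):
    """Build D = C(D_1, ..., D_n) as defined above.

    C_edges: set of directed edges (i, j) on the vertices 0..n-1 of C
             (0-based here, rather than {1,...,n}).
    Ds: list of digraphs, each given as (vertex list, set of edges (u, v)).
    Vertices of D are tagged pairs (i, v) so that the D_i stay disjoint."""
    vertices = [(i, v) for i, (Vi, _) in enumerate(Ds) for v in Vi]
    edges = set()
    for i, (_, Ei) in enumerate(Ds):                 # condition (a)
        edges |= {((i, u), (i, v)) for (u, v) in Ei}
    for (i, j) in C_edges:                           # condition (b)
        edges |= {((i, u), (j, v)) for u in Ds[i][0] for v in Ds[j][0]}
    return vertices, edges

# Replace the two vertices of C (single edge 0 -> 1) by a 2-cycle and a
# single isolated vertex:
V, E = compose({(0, 1)}, [(["a", "b"], {("a", "b"), ("b", "a")}),
                          (["c"], set())])
```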
Question: Is there a canonical name/notation for this process? I was digging through the Wikipedia article Graph operations to find something similar, but to no avail (perhaps I just missed it). In particular, I'm interested in when $D_1, D_2, ..., D_n$ are strongly connected and $C$ is transitive. Then the condensation of $D$ is $C$. My idea is to "reverse" the process of condensation.
REPLY [1 votes]: I finally found a canonical name for this process in this article: Hamiltonian tournaments with the least number of $3$-cycles. The author calls $D = C(D_1,D_2,...,D_n)$ the composition of the $n$ components $D_1,D_2,...,D_n$ with the quotient $C$. | {"set_name": "stack_exchange", "score": 5, "question_id": 4360185}
TITLE: Torsion Modules Annihilators
QUESTION [2 upvotes]: Let $R$ be a PID, let $B$ be a torsion $R$-module, and let $p$ be a prime in $R$. Show that if $pb = 0$ for some nonzero $b \in B$, then $\operatorname{Ann}(B)$ is a subset of $(p)$.
Well, we can let $\operatorname{Ann}(B) = (t)$ for some $t \in R$ since it is an ideal in $R$. How do we make use of the fact that $B$ is torsion (i.e., for each $b \in B$ there is a nonzero $r \in R$ such that $rb = 0$) to prove the desired result?
REPLY [1 votes]: Hint: Since $\operatorname{Ann}(B)\subseteq\operatorname{Ann}(b)$, it suffices to show $\operatorname{Ann}(b)=(p)$.
A stronger hint is hidden below.
Since $pb=0$, $(p)\subseteq\operatorname{Ann}(b)$. That is, $\operatorname{Ann}(b)$ is an ideal which contains $(p)$. What ideals are there which contain $(p)$? | {"set_name": "stack_exchange", "score": 2, "question_id": 2179743} |
TITLE: Are coherent states destroyed by measurements?
QUESTION [2 upvotes]: For a quantum harmonic oscillator in a coherent superposition, what happens if the energy is measured? Would it collapse to an energy eigenstate (a single excitation) corresponding to the result of the measurement, thus destroying the coherence? Similarly, for a quantum field in a coherent state (classical EM field wave), does the entire field configuration collapse to a single eigenstate after the field is measured? Wouldn't this destroy the coherence?
EDIT: I do realize that it should collapse, but I'm not sure how to interpret pictures of coherent state measurements, such as this one from Wikipedia ("Coherent State" article).
Shouldn't the coherent state collapse after the very first measurement (at t=0), thus destroying the "classical wave" pattern for the rest of the measurements? Yet it looks classical in this picture...
REPLY [0 votes]: If the wavefunction can collapse for position measurements then why can't coherence be destroyed by energy measurements? It's approximately the same reason. | {"set_name": "stack_exchange", "score": 2, "question_id": 567697} |
TITLE: Does the shuffling of a sequence of measurements produce an i.i.d. sequence?
QUESTION [0 upvotes]: Given a stationary sequence of measurements, say, of response times (i.e., sojourn times), of a queue (e.g., M/M/1), if we randomly reshuffle such samples, will we get a sequence of i.i.d. samples? If so, how to prove it?
REPLY [2 votes]: If I understand your question correctly, you have a sequence $X_1, X_2, \ldots$ of random variables with some joint distribution. You require this to be stationary: i.e., for any $k>0$, the joint distribution of $(X_i,\ldots, X_{i+k-1})$ is the same for all $i$.
Now, you randomly shuffle the $X_i$ into a new sequence, say $Y_1, Y_2,\ldots$, and ask if this makes the $Y_i$ independent.
It's not fully clear to me how to shuffle an infinite sequence. The best I can come up with is to take a sequence of length $n$, shuffle this, and then see if the limit is a well-defined probability distribution. For the sake of argument, let's assume it does. (Maybe someone can elaborate on this?)
Clement C has already given an example in the comments of how shuffling a finite sequence need not give iid: essentially by taking a joint distribution where the sum $X_1+\cdots+X_n$ is constant. However, this might still result in iid in the limit $n\rightarrow\infty$.
As a counter-example for an infinite sequence of variables, assume $X_i\sim F_\Theta$ for some random distribution $F_\Theta$ depending on a parameter $\Theta$, where $\Theta$ itself is a random variable. For example, $X_i\sim N(U,\sigma_X^2)$ where $U\sim N(\mu_U,\sigma_U^2)$. This is the kind of model that arises in multi-level models and empirical Bayes. Now, the conditional distribution of $(X_1, X_2, \ldots)|U$, i.e., conditional on $U$, is iid. However, because of their joint dependence on $U$, the $X_i$ are not iid.
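A small simulation (my own sketch of this hierarchical example, with $\mu_U=0$ and $\sigma_U=\sigma_X=1$) makes the marginal dependence concrete:

```python
import random

def sample_pair(rng, mu_u=0.0, sigma_u=1.0, sigma_x=1.0):
    """Draw (X1, X2) from the hierarchical model above:
    U ~ N(mu_u, sigma_u^2), then X1, X2 ~ N(U, sigma_x^2) independently.
    Conditionally on U the X_i are iid; marginally Cov(X1, X2) = sigma_u^2."""
    u = rng.gauss(mu_u, sigma_u)
    return rng.gauss(u, sigma_x), rng.gauss(u, sigma_x)

rng = random.Random(0)
pairs = [sample_pair(rng) for _ in range(20000)]
m1 = sum(x for x, _ in pairs) / len(pairs)
m2 = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - m1) * (y - m2) for x, y in pairs) / len(pairs)
# cov comes out close to sigma_u^2 = 1, so the X_i are dependent; and since
# any shuffle leaves the joint law unchanged (exchangeability), shuffling
# cannot make them independent.
```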
However, the example above, as well as the randomised sequence of observations from your question (if well-defined), are examples of exchangeable sequences of variables. This simply means that the distribution is invariant under any reordering. | {"set_name": "stack_exchange", "score": 0, "question_id": 2965518}
TITLE: If sin B = 4/5 with B in Q1, find the following. sin (B/ 2)
QUESTION [0 upvotes]: If
$\sin B = 4/5$
with $B$ in Q1, find the following:
$\sin (B/2)$.
Well, I know the formula is $$\sin{\frac{B}{2}} = \sqrt{\frac{1+ \cos B}{2}}$$ and I know it's a $3$-$4$-$5$ triangle. I keep getting $\frac{2\sqrt{5}}{5}$.
REPLY [0 votes]: If $B$ is in Q1, the half-angle formula for sine is: $$\sin(B/2)=\sqrt{(1-\cos(B))/2}$$
$$\cos^2(B)=1-\sin^2(B)=1-16/25=9/25,$$ so $\cos(B)=3/5$ (positive, since $B$ is in Q1).
So $$\sin(B/2)=\sqrt{(1-3/5)/2}=\frac{1}{\sqrt{5}}$$ | {"set_name": "stack_exchange", "score": 0, "question_id": 2495734}
\chapter{Complexes with \texorpdfstring{$\psi$}{psi}-operator}\label{chapter Complex with psi-operator}
In this chapter, we construct a complex with $\psi$ operator (similar to that in \cite[\S 3]{Her98}) that computes the continuous Galois cohomology.
\medskip
As $F_{\tau}$ is perfect, we cannot have such an operator on $(\varphi, \tau)$-modules over $(\Oo_{\Ee}, \Oo_{\Ee_{\tau}})$ (\cf remark \ref{rem why we need unperfected version}): we have to use a refinement of $(\varphi, \tau)$-module theory developed in \cite[1.2.2]{Car13}. More precisely, we will work with coefficients $\Oo_{\Ee_{u, \tau}},$ whose residue field is not perfect ($\cf$ notation \ref{not dictionary}).
\medskip
We construct a complex $\Cc^{u}_{\varphi, \tau}$ in section \ref{section C varphi tau u} and we show that it computes the continuous Galois cohomology. Replacing the operator $\varphi$ with $\psi$ in $\Cc^{u}_{\varphi,\tau}$ provides another complex $\Cc^{u}_{\psi,\tau}$ and we show that these two complexes are quasi-isomorphic. Hence $\Cc^{u}_{\psi,\tau}$ computes the continuous Galois cohomology.
\medskip
To prove the results mentioned above, we will as usual start with $\FF_p$-representations and then pass to $\ZZ_p$-representations by d\'evissage.
\section{The \texorpdfstring{$(\varphi, \tau)$}{(phi,tau)}-modules over partially unperfected coefficients}\label{section modules over unperfected coefficients}
\medskip
For simplicity, we will denote by $u$ and $\eta$ the elements $\widetilde{\pi}$ and $\varepsilon-1$ in $C^\flat$ (this is a little abuse of notation since, strictly speaking, $u$ is a variable that maps to $\widetilde{\pi}$ under the injective map $F_0 \to C^\flat$ and similarly for $\eta$) or the elements $[\widetilde{\pi}]$ and $[\varepsilon]-1$ in $\W(C^\flat)$ under the injective map $\Oo_{\Ee}\to \W(C^{\flat}).$
\begin{notation}($\cf$ \cite[\S 1.2.2]{Car13})\label{rem fixed finite degree}
\item (1) We put \[ F_{u, \tau}:= k(\!(u, \eta^{1/p^{\infty}})\!)=k(\!(u, \eta )\!)\big[\eta^{1/p^{\infty}}\big]=\bigcup_{n\in \NN}k(\!(u, \eta)\!)[\eta^{1/p^n}]\subset C^{\flat}. \]\index{$F_{u, \tau}$}
\medskip
\item (2) By an abuse of notation, we denote by
\[ C_{u-\np}^{\flat}=F_{u,\tau}^{\sep}\subset C^{\flat} \]\index{$C_{u-\np}^{\flat}$}
the separable closure of $F_{u, \tau}$ in $C^{\flat}.$
\medskip
Note that $C^{\flat}_{u-\np}$ is \emph{not} the tilt of a perfectoid field, though ambiguously it carries a superscript $\flat$ in the notation.
\item (3) We put \[\mathsf{G}:=\Gal(F_{u,\tau}^{\sep}/ F_{u, \tau}).\] \index{$\mathsf{G}$}
\end{notation}
\begin{lemma}\label{lemm for injectivity of lambda}
The group $\mathsf{G}$ acts isometrically on $F_{u, \tau}^{\sep}.$
\end{lemma}
\begin{proof}
Denote by $v$ the valuation of $C^{\flat}$ that is normalized by $v(\widetilde{\pi})=1/e.$ We show that $v_{|_{F_{u, \tau}^{\sep}}}$ is the unique valuation of $F_{u, \tau}^{\sep}$ that extends $v_{|_{F_{u, \tau}}}.$
\medskip
If $\alpha\in F_{u, \tau}^{\sep},$ then $\alpha$ is algebraic over $F_{u, \tau}=\bigcup\limits_n k(\!(u, \eta^{1/p^n})\!).$ This implies that there exists $n\in \NN$ such that $\alpha$ is algebraic over $k(\!(u, \eta^{1/p^n})\!).$ Notice that $k(\!(u, \eta^{1/p^n})\!)$ is complete for the valuation $v$ and hence there exists a unique valuation over $k(\!(u, \eta^{1/p^n})\!)[\alpha]$ that extends $v_{|_{F_{u, \tau}}}$ ($\cf$ \cite[Chapter II, \S 4, Theorem 4.8]{Neu13}). This implies that for any $g\in \mathsf{G},$ $v\circ g=v$ over $F_{u, \tau}^{\sep}.$
\end{proof}
\begin{remark}
Notice that the action of $\mathsf{G}\simeq \Gal(F_{u, \tau}^{\sep}/F_{u, \tau})$ on $F_{u, \tau}^{\sep}$ is continuous for the discrete topology.
\end{remark}
\begin{proposition}\label{prop G}
We have \[ \mathsf{G}\simeq \mathscr{G}_L. \]
\end{proposition}
\begin{proof}
We first prove that there are injective maps $\mathscr{G}_L\to \mathsf{G} \to \mathscr{G}_{K_{\pi}}$ whose composite is the inclusion.
\medskip
Let $\alpha\in F_{u, \tau}^{\sep}$ and $P(X)\in F_{u, \tau}[X]$ its minimal polynomial of $\alpha$ over $F_{u, \tau}.$ Then for any $g\in \mathscr{G}_L,$ we have
\[ g(P(X))=P(X) \quad \Longrightarrow \quad P(g(\alpha))=g(P(\alpha))=0 \quad \Longrightarrow \quad g(\alpha)\in F_{u, \tau}^{\sep}.\] This shows that $F_{u, \tau}^{\sep}$ is stable under $\mathscr{G}_L.$ As $\mathscr{G}_L$ fixes $F_{u, \tau},$ this implies that we have a morphism of groups:
\[ \mathscr{G}_L \xrightarrow{\quad \rho \quad } \mathsf{G}. \]
Similarly, let $\alpha\in F_0^{\sep}$ and $Q(X)\in F_0[X]$ its minimal polynomial over $F_0.$ Then for any $g\in \mathsf{G},$ we have
\[ g(Q(X))=Q(X) \quad \Longrightarrow \quad Q(g(\alpha))=g(Q(\alpha))=0 \quad \Longrightarrow g(\alpha)\in F_{0}^{\sep}.\]
This shows that $F_{0}^{\sep}$ is stable under $\mathsf{G}.$ As $\mathsf{G}$ fixes $F_0,$ this implies that we have a morphism of groups:
\[ \mathsf{G} \xrightarrow{\quad \lambda \quad} \Gal(F_0^{\sep}/F_0)\simeq \mathscr{G}_{K_{\pi}}. \]
Hence finally we have the diagram:
\[ \xymatrix{ \mathscr{G}_L \ar[r]^{\rho} \ar@{_(->}[rd] & \mathsf{G} \ar[d]^{\lambda} \\
& \mathscr{G}_{K_{\pi}}. } \]
The map $\rho$ is thus injective and we are left to prove that $\lambda$ is injective. Take any $g\in \mathsf{G}$ that acts trivially on $F_0^{\sep}.$ Recall that $F_0^{\sep}$ is dense in $C^{\flat}$ for the valuation topology ($\cf$ \cite[Proof of Proposition 1.8]{Car13}) and hence also dense in $F_{u, \tau}^{\sep}.$ By lemma \ref{lemm for injectivity of lambda}, we have $g=\id_{F_{u, \tau}^{\sep}}$ and hence $\lambda$ is injective.
\medskip
Now we can see $\mathsf{G}/\rho(\mathscr{G}_L)$ as a subgroup of $\mathscr{G}_{K_{\pi}}/\mathscr{G}_L\simeq \overline{\la \gamma \ra}.$ Suppose $z\in \ZZ_p$ is such that $\gamma^{z}$ acts trivially on $F_{u, \tau},$ then we have
\[\varepsilon-1 =\gamma^{z}(\varepsilon-1) =\varepsilon^{\chi(\gamma)^z}-1,\ \ie \varepsilon^{\chi(\gamma)^z}=\varepsilon,\ \text{ thus } \gamma^{z}=\id.\]
This shows that $\rho$ is surjective, and we conclude that $\mathsf{G}\simeq \mathscr{G}_L.$
\end{proof}
\begin{corollary}\label{coro F u tau = C GL}
We have $F_{u, \tau}=(C_{u-\np}^{\flat})^{\mathscr{G}_L}.$
\end{corollary}
\begin{proof}
By definition we have $F_{u, \tau}=(C_{u-\np}^{\flat})^{\mathsf{G}},$ which is $(C_{u-\np}^{\flat})^{\mathscr{G}_L}$ by proposition \ref{prop G}.
\end{proof}
\begin{remark} (1) Recall that
$$\Oo_{\Ee}=\Big\{ \sum\limits_{i\in \ZZ} a_iu^i;\ a_i\in W(k),\ \Lim{i\to -\infty}a_i=0 \Big\}$$ is a Cohen ring for $k(\!(u)\!),$ and is equipped with the lift $\varphi$ of the Frobenius of $k(\!(u)\!),$ such that $\varphi(u)=u^p.$ It embeds in $\W(C^{\flat})$ by sending $u$ to $[\widetilde{\pi}].$ Similarly we have a Cohen ring $\Oo_{\Ff_u}$\index{$\Oo_{\Ff_u}$} for $k(\!(u, \eta^{1/p^{\infty}})\!)$ which is endowed with a Frobenius $\varphi$ and embeds in $\W(C^{\flat})$ so that
\[\Oo_{\Ee}\to \Oo_{\Ff_{u}}\to \W(C^{\flat}) \]
are compatible with Frobenius maps ($\cf$ \cite[1.3.3]{Car13}).
\item (2) Note that $F_{u,\tau}$ is stable by the action of $\tau$ on $\big(C^\flat\big)^{\mathscr{G}_L}$, because $\tau(u)=u(\eta+1)$ and $\tau(\eta^{1/p^n})=\eta^{1/p^n}$ for all $n\in\NN$. Similarly, $\mathcal{O}_{\Ff_u}$ is stable under the action of $\tau$ on $\W(C^\flat)^{\mathscr{G}_L}$.
\end{remark}
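Concretely, the Frobenius of $\Oo_{\Ee}$ acts by
\[ \varphi\Big( \sum_{i\in \ZZ} a_iu^i \Big)=\sum_{i\in \ZZ} \varphi_{\W(k)}(a_i)\, u^{pi}, \]
where $\varphi_{\W(k)}$ denotes the usual Frobenius of $\W(k)$: this is the lift mentioned above, determined by $\varphi_{|_{\W(k)}}=\varphi_{\W(k)}$ and $\varphi(u)=u^p.$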
\begin{notation}\label{def F ur Frob}
Let $\Ff_u=\Frac(\Oo_{\Ff_u})$\index{$\Ff_u$} and $\Ff^{\ur}_u$\index{$\Ff^{\ur}_u$} the maximal unramified extension in $\W(C^{\flat})[1/p]$ and $\Oo_{\Ff^{\ur}_u}$\index{$\Oo_{\Ff^{\ur}_u}$} its ring of integers. We denote $\Oo_{\widehat{\Ff^{\ur}_{u}}}$\index{$\Oo_{\widehat{\Ff^{\ur}_{u}}}$} its $p$-adic completion and put $\widehat{\Ff^{\ur}_{u}}=\Oo_{\widehat{\Ff^{\ur}_{u}}}[1/p].$\index{$\widehat{\Ff^{\ur}_{u}}$}
The ring $\Oo_{\Ff^{\ur}_{u}}$ is endowed with a Frobenius map that is compatible with the Frobenius map in $\W(C^{\flat})$ by our construction above. By continuity, it extends into a Frobenius map on $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ and $\widehat{\Ff^{\ur}_{u}}.$
We put
\[\Oo_{\Ee_{u,\tau}}:=(\Oo_{\widehat{\Ff^{\ur}_{u}}})^{\mathscr{G}_L},\quad \Ee_{u,\tau}=\Oo_{\Ee_{u,\tau}}[1/p]. \]\index{$\Oo_{\Ee_{u,\tau}}$}
\end{notation}
\begin{remark}
We summarize the notations by the following diagram:
\[ \xymatrix{
\W(C^{\flat}) \ar@{->>}[r] & C^{\flat}\ar@{-}[d]\\
\Oo_{\Ff^{\ur}_u}\ar[u]\ar@{->>}[r]& F_{u,\tau}^{\sep}\ar@{-}[d]\\
\Oo_{\Ff_{u}} \ar@{->>}[r]\ar[u]& F_{u,\tau}\\
\ZZ_p\ar[u] \ar@{->>}[r]& \FF_p.\ar[u]
} \]
\end{remark}
\begin{theorem}\label{thm notation of Caruso}
We have $\Oo_{\Ff_u}=\Oo_{\Ee_{u, \tau}}.$
\end{theorem}
\begin{proof}
We have the following diagram:
\[ \xymatrix{
& \Oo_{\widehat{\Ff_u^{\ur}}} &\\
\Oo_{\Ff_u}\ar@{-}[ru] \ar@{->>}[d]&& \Oo_{\Ee_{u, \tau}} \ar@{-}[lu] \ar@{->>}[d]\\
F_{u,\tau}\ar@{=}[rr]^{\text{Proposition }\ref{prop G}}&&(F_{u,\tau}^{\sep})^{\mathscr{G}_L}.
} \]
Notice that $\Oo_{\Ff_u}$ is fixed by $\mathscr{G}_L,$ hence $\Oo_{\Ff_u}\subset \Oo_{\Ee_{u, \tau}}.$ Both $\Oo_{\Ff_u}$ and $\Oo_{\Ee_{u, \tau}}$ have the same residue field $F_{u, \tau}.$ Indeed, $\Oo_{\Ee_{u, \tau}}$ has residue field $(F_{u,\tau}^{\sep})^{\mathscr{G}_L},$ which is $F_{u,\tau}$ by proposition \ref{prop G}. Hence $\Oo_{\Ff_u}\subset \Oo_{\Ee_{u, \tau}}$ are two Cohen rings for $F_{u, \tau}$ and they must be equal as $\Oo_{\Ff_u}$ is dense and closed inside $\Oo_{\Ee_{u, \tau}}$ for the $p$-adic topology.
\end{proof}
\begin{remark}
The theorem \ref{thm notation of Caruso} can be rewritten as $\Ff^{\mathsf{int}}_{u-\mathsf{np}}=\Ee_{u-\mathsf{np}, \tau}^{\mathsf{int}}$ with Caruso's notations in \cite[1.3.3]{Car13}.
\end{remark}
\begin{definition}\label{def phi tau Fp u}
A \emph{$(\varphi, \tau)$-module over $(F_0, F_{u,\tau})$} is the data:
\item (1) an \'etale $\varphi$-module $D$ over $F_0$;
\item (2) a $\tau$-semi-linear endomorphism $\tau_D$ over $D_{u, \tau}:= F_{u,\tau} \otimes_{F_0} D$ which commutes with $\varphi_{F_{u,\tau}}\otimes \varphi_D$ (where $\varphi_{F_{u,\tau}}$ is the Frobenius map on $F_{u,\tau}$ and $\varphi_D$ the Frobenius map on $D$) such that
\[ \big(\forall x \in D\big)\ (g\otimes 1)\circ \tau_D (x) = \tau_D^{\chi(g)}(x), \]
for all $g\in \mathscr{G}_{K_{\pi}}/\mathscr{G}_L$ such that $\chi(g)\in \NN.$\\
We denote $\Mod_{F_0,F_{u,\tau}}(\varphi,\tau)$\index{$\Mod_{F_0,F_{u,\tau}}(\varphi,\tau)$} the corresponding category.
\end{definition}
\begin{theorem}\label{thm cat equi Fp u}
The functors
\begin{align*}
\Rep_{\FF_p}(\mathscr{G}_K) &\to \Mod_{F_0,F_{u,\tau}}(\varphi,\tau)\\
T &\mapsto \mathcal{D}(T)= (F_0^{\sep}\otimes_{\FF_p} T)^{\mathscr{G}_{K_{\pi}}}\\
\mathcal{T}(D)=(F_0^{\sep}\otimes_{F_0} D)^{\varphi=1} &\mapsfrom D
\end{align*}
establish quasi-inverse equivalences of categories, where the $\tau$-semilinear endomorphism $\tau_D$ over $\mathcal{D}(T)_{u,\tau}:=F_{u,\tau}\otimes_{F_0} \mathcal{D}(T)$ is induced by $\tau\otimes\tau$ on $C_{u-\np}^\flat\otimes T$ using the following lemma \ref{lemm tau action natural}.
\end{theorem}
\begin{proof}
$\cf$ \cite[Th\'eor\`eme 1.14]{Car13}.
\end{proof}
\begin{lemma}\label{lemm tau action natural}
The natural map $F_{u,\tau} \otimes_{F_0} \mathcal{D}(T)\to (C^{\flat}_{u-\np}\otimes T)^{\mathscr{G}_L}$ is an isomorphism.
\end{lemma}
\begin{proof}
$\cf$ \cite[Lemma 1.12]{Car13}.
\end{proof}
More generally, we have the integral analogue of theorem \ref{thm cat equi Fp u}.
\begin{definition}\label{def phi tau over O e Oe u tau}
A \emph{$(\varphi, \tau)$-module over $(\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}})$} is the data:
\item (1) an \'etale $\varphi$-module $D$ over $\Oo_{\Ee}$;
\item (2) a $\tau$-semi-linear endomorphism $\tau_D$ over $D_{u, \tau}:=\mathcal{O}_{\mathcal{E}_{u,\tau}} \otimes_{\Oo_{\Ee}} D$ which commutes with $\varphi_{\mathcal{O}_{\mathcal{E}_{u,\tau}}}\otimes \varphi_D$ (where $\varphi_{\mathcal{O}_{\mathcal{E}_{u,\tau}}}$ is the Frobenius map on $\mathcal{O}_{\mathcal{E}_{u,\tau}}$ and $\varphi_D$ the Frobenius map on $D$) such that
\[ (\forall x \in D)\ (g\otimes 1)\circ \tau_D (x) = \tau_D^{\chi(g)}(x), \]
for all $g\in \mathscr{G}_{K_{\pi}}/\mathscr{G}_L$ such that $\chi(g)\in \NN.$ \\
We denote $\Mod_{\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}}}(\varphi,\tau)$\index{$\Mod_{\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}}}(\varphi,\tau)$} the corresponding category.
\end{definition}
\begin{theorem}
The functors
\begin{align*}
\Rep_{\ZZ_p}(\mathscr{G}_K) &\to \Mod_{\Oo_{\Ee},\Oo_{\Ee_{u,\tau}}}(\varphi,\tau)\\
T &\mapsto \mathcal{D}(T)= (\Oo_{\widehat{\Ee^{\ur}}}\otimes_{\ZZ_p} T)^{\mathscr{G}_{K_{\pi}}}\\
\mathcal{T}(D)=(\Oo_{\widehat{\Ee^{\ur}}}\otimes_{\Oo_{\Ee}} D)^{\varphi=1} &\mapsfrom D
\end{align*}
establish quasi-inverse equivalences of categories, where the $\tau$-semilinear endomorphism $\tau_D$ over $\mathcal{D}(T)_{u,\tau}:=(\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p} T)^{\mathscr{G}_L}\simeq\Oo_{\Ee_{u,\tau}}\otimes\mathcal{D}(T)$ is induced by $\tau\otimes\tau$ on $\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p} T.$
\end{theorem}
\begin{remark}\label{rem 3.1.16 e e u tau}
For $V\in \Rep_{\QQ_p}(\mathscr{G}_K),$ we can similarly define $\Mod_{\Ee, \mathcal{E}_{u,\tau}}(\varphi,\tau)$\index{$\Mod_{\Ee, \mathcal{E}_{u,\tau}}(\varphi,\tau)$}: the category of $(\varphi, \tau)$-modules over $(\Ee, \mathcal{E}_{u,\tau})$ and establish an equivalence of category.
\end{remark}
\section{The complex \texorpdfstring{$\Cc_{\varphi, \tau}^{u}$}{C phi, tau, u}}\label{section C varphi tau u}
\begin{notation}\label{not D u tau normla version at the beginning}
Let $(D, D_{u,\tau})\in \Mod_{\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}}}(\varphi,\tau),$ we put
\[D_{u, \tau, 0}:=\big\{ x\in D_{u, \tau};\ (\forall g\in \mathscr{G}_{K_{\pi}})\ \chi(g)\in \ZZ_{>0} \Rightarrow (g\otimes 1)(x)=x+\tau_D(x)+\tau_D^{2}(x)+\cdots +\tau_D^{\chi(g)-1}(x) \big\}. \]\index{$D_{u, \tau, 0}$}
By similar arguments as that of lemma \ref{Replace g witi gamma}, we have
\[D_{u,\tau, 0}=\big\{ x\in D_{u,\tau} ;\ (\gamma\otimes 1)x=(1+\tau_D+\tau_D^{2}+\cdots + \tau_D^{\chi(\gamma)-1})(x) \big\}.\]
\end{notation}
\begin{lemma}\label{lemm complex u well defined}
Let $(D, D_{u,\tau})\in \Mod_{\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}}}(\varphi,\tau),$ then $\varphi-1$ and $\tau_D-1$ induce maps $\varphi-1: D_{u, \tau, 0}\to D_{u, \tau, 0}$ and $\tau_D-1: D\to D_{u, \tau, 0}.$
\end{lemma}
\begin{proof}
$\cf$ lemma \ref{lemm 1.1.11 well-defined tau-1 varphi-1}.
\end{proof}
\begin{definition}\label{def complex varphi tau}
Let $(D, D_{\tau})\in \Mod_{\Oo_{\Ee}, \mathcal{O}_{\mathcal{E}_{u,\tau}}}(\varphi,\tau)$ (resp. $(D, D_{\tau})\in \Mod_{\Ee, \mathcal{E}_{u,\tau}}(\varphi,\tau)$). We define a complex $\Cc_{\varphi, \tau}^{u}(D)$\index{$\Cc_{\varphi, \tau}^{u}(D)$} as follows:
\[ \xymatrix{ 0 \ar[rr] && D \ar[r] & D\oplus D_{u,\tau, 0} \ar[r] & D_{u,\tau, 0} \ar[r] & 0\\
&&x \ar@{|->}[r] & ((\varphi-1)(x), (\tau_D-1)(x)) &&\\
&& & (y,z) \ar@{|->}[r]& (\tau_D-1)(y)-(\varphi-1)(z).&
}
\]
If $T\in \Rep_{\ZZ_p}(\mathscr{G}_K)$ (resp. $V\in \Rep_{\QQ_p}(\mathscr{G}_K)$), we have in particular the complex $\Cc_{\varphi, \tau}^{u}(\mathcal{D}(T))$ (resp. $\Cc_{\varphi, \tau}^{u}(\mathcal{D}(V))$), which will also be simply denoted $\Cc_{\varphi, \tau}^{u}(T)$\index{$\Cc_{\varphi, \tau}^{u}(T)$} (resp. $\Cc_{\varphi, \tau}^{u}(V)$).
\end{definition}
\begin{theorem}\label{coro complex over non-perfect ring works also Zp-case}
Let $T\in \Rep_{\ZZ_p}(\mathscr{G}_K),$ then the complex $\Cc_{\varphi, \tau}^{u}(T)$ computes the continuous Galois cohomology of $T,$ $\ie \H^i(\mathscr{G}_K,\Cc_{\varphi, \tau}^{u}(T))\simeq \H^i(\mathscr{G}_K, T)$ for $i\in \NN.$
\end{theorem}
\begin{remark}\label{rem diagram non-perfect complex} (1) Let $T\in \Rep_{\ZZ_p}(\mathscr{G}_K)$ and $(D, D_{u,\tau})$ be its $(\varphi, \tau)$-module over $(\Oo_{\Ee},\Oo_{\Ee_{u,\tau}}).$ We have the following diagram of complexes
\[\xymatrix{
\Cc_{\varphi, \tau}^{u}(D)& 0 \ar[r] & D \ar@{=}[d]\ar[rr]^-{(\varphi-1, \tau_D-1)}&& D \oplus D_{u,\tau, 0} \ar@{^(->}[d]\ar[rr]^-{ (\tau_D-1)\ominus(\varphi-1)}&& D_{u,\tau,0} \ar@{^(->}[d]\ar[r] & 0\\
\Cc_{\varphi, \tau}(D)& 0 \ar[r] & D \ar[d]\ar[rr]^-{(\varphi-1, \tau_D-1)}&& D \oplus D_{\tau,0} \ar@{->>}[d]\ar[rr]^-{(\tau_D-1)\ominus(\varphi-1)}&& D_{\tau,0} \ar@{->>}[d]\ar[r] & 0\\
\Cc(D) & 0 \ar[r] & 0 \ar[rr]^{}&& D_{\tau,0}/D_{u,\tau, 0} \ar[rr]^-{\varphi-1}&& D_{\tau, 0}/D_{u,\tau, 0} \ar[r] & 0.\\
}\]
\item (2) Recall that for $T\in \Rep_{\ZZ_p}(\mathscr{G}_K),$ the complex $\Cc_{\varphi, \tau}(D)$ computes the continuous Galois cohomology by theorem \ref{thm main result}. We will show in the following that $\Cc_{\varphi, \tau}^{u}(D)$ and $\Cc_{\varphi, \tau}(D)$ are quasi-isomorphic, and hence the complex $\Cc_{\varphi, \tau}^{u}(D)$ also computes the continuous Galois cohomology.
\end{remark}
\subsection{Proof of theorem \ref{coro complex over non-perfect ring works also Zp-case}: the quasi-isomorphism}
Let $T\in \Rep_{\ZZ_p}(\mathscr{G}_K),$ and denote by $(D, D_{u, \tau})$ its $(\varphi, \tau)$-module over $(\Oo_{\Ee}, \Oo_{\Ee_{u,\tau}}).$
\begin{lemma}\label{lemm D tau D u tau inj}
The map $D_{\tau}/D_{u,\tau}\xrightarrow{\varphi-1}D_{\tau}/D_{u,\tau}$ is injective.
\end{lemma}
\begin{proof}
For any $x\in D_{\tau}=(\W(C^{\flat})\otimes_{\ZZ_p}T)^{\mathscr{G}_L}$ such that $(\varphi-1)x\in D_{u,\tau}=(\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p} T)^{\mathscr{G}_L},$ we have to show that $x\in D_{u,\tau}.$ It suffices to show that for any element $x\in \W(C^{\flat})\otimes_{\ZZ_p}T,$ the relation $(\varphi-1)x\in \Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p} T$ implies $x\in \Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p} T.$ By d\'evissage we can reduce to the case $pT=0.$ For any $x\in C^{\flat}\otimes_{\FF_p} T,$ we claim that $(\varphi-1)x\in k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}\otimes_{\FF_p}T$ implies $x\in k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}\otimes_{\FF_p}T.$ We have
$C^{\flat}\otimes T\simeq (C^{\flat})^d$ and
$$k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}\otimes T\simeq (k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep})^d$$ as $\varphi$-modules. Hence it suffices to show that for any $x\in C^{\flat},$ the relation $x^p-x\in k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}$ implies $x\in k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}.$ Put $P(X)=X^p-X-(x^p-x)\in k(\!(u, \eta^{1/p^{\infty}})\!)^{\sep}[X]:$ it is separable since $P^{\prime}(X)=-1,$ so that $x$ is separable over $k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}$ and hence $x\in k(\!( u, \eta^{1/p^{\infty}})\!)^{\sep}.$
\end{proof}
\begin{lemma}\label{lemm 3.2.8 assume T is killed by p} Assume $T$ is killed by $p.$ Then there are exact sequences of abelian groups
\begin{align*}
0&\to T^{\mathscr{G}_L} \to D_{u, \tau} \xrightarrow{\varphi-1} D_{u, \tau} \to \H^1(\mathscr{G}_L, T) \to 0\\
0&\to T^{\mathscr{G}_L} \to D_{\tau} \xrightarrow{\varphi-1} D_{\tau} \to \H^1(\mathscr{G}_L, T) \to 0.
\end{align*}
\end{lemma}
\begin{proof}
The sequence of $\mathscr{G}_L$-modules
\[ 0\to \FF_p \to k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep} \xrightarrow{\varphi-1} k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep} \to 0 \] and
\[ 0\to \FF_p \to C^{\flat} \xrightarrow{\varphi-1} C^{\flat} \to 0 \] are exact (here we endow $k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep}$ with the discrete topology and $C^{\flat}$ with its valuation topology). Tensoring with $T$ and taking continuous cohomology (the first for the discrete topology, the second for valuation topology) gives exact sequences:
\begin{align*}
0&\to T^{\mathscr{G}_L} \to D_{u, \tau} \xrightarrow{\varphi-1} D_{u, \tau} \to \H^1(\mathscr{G}_L, T) \to \H^1(\mathscr{G}_L, k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep}\otimes_{\FF_p} T) \\
0&\to T^{\mathscr{G}_L} \to D_{\tau} \xrightarrow{\varphi-1} D_{\tau} \to \H^1(\mathscr{G}_L, T) \to \H^1(\mathscr{G}_L, C^{\flat}\otimes_{\FF_p} T).
\end{align*}
The lemma follows from the vanishing of $\H^1(\mathscr{G}_L, k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep}\otimes_{\FF_p} T)$ (by Hilbert 90) and the vanishing of $\H^1(\mathscr{G}_L, C^{\flat}\otimes_{\FF_p} T)$ (that follows from the fact that $\widehat{L}$ is perfectoid.)
\end{proof}
\begin{lemm}\label{lemmcoho0 Zp tor non-perfect case}
If $T\in\Rep_{\ZZ_p, \tors}(\mathscr{G}_K),$ then
\[\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T)=0\]
where $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ is endowed with the $p$-adic topology.
\end{lemm}
\begin{proof}
By d\'evissage we may assume that $T$ is killed by $p,$ in which case this reduces to the vanishing of $\H^1(\mathscr{G}_L, C_{u-\np}^{\flat}\otimes_{\FF_p}T),$ which follows from Hilbert 90 since $\mathscr{G}_L$ acts continuously on $C_{u-\np}^{\flat}=k(\!(u, \eta^{1/p^{\infty}})\!)^{\sep}$ (endowed with the discrete topology).
\end{proof}
\begin{corollary}\label{lemmcoho0 Zp non-perfect case}
If $T\in\Rep_{\ZZ_p}(\mathscr{G}_K),$ then
\begin{align*}
\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T)=0,
\end{align*}
where $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ is endowed with the $p$-adic topology.
\end{corollary}
\begin{proof}
Denote $\Oo_{\widehat{\Ff^{\ur}_{u}}, n}:=\Oo_{\widehat{\Ff^{\ur}_{u}}}/p^n.$
By \cite[Theorem 2.3.4]{NSW13}, we have the exact sequence
$$0\to \R^1\pLim{n}\H^{0}(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}},n}\otimes_{\ZZ_p}T)\to
\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T)\to\pLim{n} \H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}},n}\otimes_{\ZZ_p}T).$$
We have $\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}},n}\otimes_{\ZZ_p}T)=0$ by lemma \ref{lemmcoho0 Zp tor non-perfect case} and $\R^1\pLim{n}\H^{0}(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}},n}\otimes_{\ZZ_p}T)=0$ from the observation that $\{\H^0(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}},n}\otimes_{\ZZ_p}T)\}_n$ has the Mittag-Leffler property. Hence we get $\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T)=0.$
\end{proof}
\begin{coro}\label{Exactness D-tau-0 Zp non-perfect case}\label{coro D u tau Zp case}
If $0\to T^\prime\to T\to T^{\prime\prime}\to0$ is an exact sequence in $\Rep_{\ZZ_p}(\mathscr{G}_K),$ then the sequence
$$0\to\mathcal{D}(T^\prime)_{u,\tau}\to\mathcal{D}(T)_{u,\tau}\to\mathcal{D}(T^{\prime\prime})_{u,\tau}\to 0$$
is exact.
\end{coro}
\begin{proof}
As $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ is torsion-free, we have the exact sequence
$$0\to\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T^\prime\to\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T\to\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T^{\prime\prime}\to0$$
which induces the exact sequence
$$0\to\mathcal{D}(T^\prime)_{u,\tau}\to\mathcal{D}(T)_{u,\tau}\to\mathcal{D}(T^{\prime\prime})_{u,\tau}\to\H^1(\mathscr{G}_L,\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T^\prime).$$
By corollary \ref{lemmcoho0 Zp non-perfect case}, we get the exact sequence.
\end{proof}
\begin{proposition}
The map $D_{\tau}/D_{u,\tau}\xrightarrow{\varphi-1}D_{\tau}/D_{u,\tau}$ is bijective.
\end{proposition}
\begin{proof}
\begin{enumerate}
\item [(1)] We first assume $T$ is killed by $p.$
By lemma \ref{lemm D tau D u tau inj}, it suffices to show the surjectivity.
By lemma \ref{lemm 3.2.8 assume T is killed by p}, the inclusion $k(\!( u, \eta^{1/p^{\infty}} )\!)^{\sep}\subset C^{\flat}$ induces a commutative diagram:
\[
\xymatrix{
0 \ar[r] & T^{\mathscr{G}_L} \ar[r] \ar@{=}[d]& D_{u, \tau} \ar[r]^{\varphi-1} \ar[d]& D_{u, \tau} \ar[r] \ar[d]& \H^1(\mathscr{G}_L, T) \ar[r]\ar@{=}[d] & 0\\
0 \ar[r] & T^{\mathscr{G}_L} \ar[r] & D_{\tau} \ar[r]^{\varphi-1} & D_{\tau} \ar[r] & \H^1(\mathscr{G}_L, T) \ar[r] & 0.
}
\]
If $y\in D_{\tau},$ there exists $z\in D_{u, \tau}$ having the same image in $\H^1(\mathscr{G}_L, T):$ hence $y-z$ maps to 0 in $\H^1(\mathscr{G}_L, T),$ so there exists $x\in D_{\tau}$ such that $(\varphi-1)x=y-z$ and thus $(\varphi-1)(x+D_{u, \tau})=y+D_{u, \tau}.$ This finishes the proof.
\item [(2)] We now use d\'evissage for $\ZZ_p$-representations. Notice that $\mathcal{D}(T/p^n)=\mathcal{D}(T)/p^n$ and that $T$ and $\mathcal{D}(T)$ are both $p$-adically complete. Hence it suffices to deal with the case where $T$ is killed by $p^n$ with $n\in \NN_{\geq0}.$ We proceed by induction on $n.$ Suppose $T$ is killed by $p^n;$ put $T^{'}=pT,$ $T^{''}=T/pT$ and consider the following exact sequence
\[ 0 \to T^{'} \to T \to T^{''} \to 0 \]
in $\Rep_{\ZZ_p}(\mathscr{G}_K).$ Then we have the following diagram of exact sequences by corollary \ref{coro D u tau Zp case}
\[ \xymatrix{ 0 \ar[r] & \mathcal{D}(T^{\prime})_{u, \tau} \ar[r] \ar@{^(->}[d]& \mathcal{D}(T)_{u, \tau} \ar[r] \ar@{^(->}[d]& \mathcal{D}(T^{\prime\prime})_{u, \tau} \ar@{^(->}[d] \ar[r]& 0 \\
0 \ar[r] & \mathcal{D}(T^{\prime})_{\tau}\ar[r] & \mathcal{D}(T)_{\tau} \ar[r] & \mathcal{D}(T^{\prime\prime})_{\tau} \ar[r] & 0.\\
} \]
By the snake lemma we have the exact sequence
\[ 0 \to \mathcal{D}(T^{\prime})_{\tau}/\mathcal{D}(T^{\prime})_{u, \tau} \to \mathcal{D}(T)_{\tau}/\mathcal{D}(T)_{u,\tau} \to \mathcal{D}(T^{\prime\prime})_{\tau}/ \mathcal{D}(T^{\prime\prime})_{u, \tau} \to 0 \]
and we consider the following diagram
\[ \xymatrix{ 0 \ar[r] & \mathcal{D}(T^{\prime})_{\tau}/\mathcal{D}(T^{\prime})_{u, \tau} \ar[r] \ar[d]_{\varphi-1}& \mathcal{D}(T)_{\tau}/\mathcal{D}(T)_{u,\tau} \ar[r] \ar[d]_{\varphi-1}& \mathcal{D}(T^{\prime\prime})_{\tau}/ \mathcal{D}(T^{\prime\prime})_{u, \tau} \ar[d]_{\varphi-1} \ar[r]& 0 \\
0 \ar[r] & \mathcal{D}(T^{\prime})_{\tau}/\mathcal{D}(T^{\prime})_{u, \tau} \ar[r]& \mathcal{D}(T)_{\tau}/\mathcal{D}(T)_{u,\tau} \ar[r]& \mathcal{D}(T^{\prime\prime})_{\tau}/ \mathcal{D}(T^{\prime\prime})_{u, \tau} \ar[r] & 0.
} \]
Notice that the first and the third vertical maps are isomorphisms by induction hypothesis, hence the middle one is an isomorphism. We then conclude by passing to the limit.
\end{enumerate}
\end{proof}
\begin{corollary}\label{coro prepare complex over non-perfect ring works also}
The map $D_{\tau,0}/D_{u,\tau,0}\xrightarrow{\varphi-1}D_{\tau,0}/D_{u,\tau,0}$ is bijective.
\end{corollary}
\begin{proof}
Consider the following morphism of short exact sequences:
\[
\xymatrix{
& D_{\tau, 0}/D_{u, \tau, 0} \ar[r]^{\varphi-1} \ar@{_(->}[d] & D_{\tau, 0}/D_{u, \tau, 0} \ar@{_(->}[d] &\\
0 \ar[r] & D_{\tau}/D_{u, \tau} \ar[r]^{\varphi-1} \ar@{->>}[d]^{\delta-\gamma} & D_{\tau}/D_{u, \tau} \ar[r] \ar@{->>}[d]^{\delta-\gamma} & 0\\
0 \ar[r] & D_{\tau}/D_{u, \tau} \ar[r]^{\varphi-1} & D_{\tau}/D_{u, \tau} \ar[r] & 0.
}
\]
The snake lemma then yields the claimed bijectivity.
\end{proof}
\begin{remark}
By remark \ref{rem diagram non-perfect complex}, this finishes the proof of theorem \ref{coro complex over non-perfect ring works also Zp-case}.
\end{remark}
\section{The complex \texorpdfstring{$\Cc_{\psi,\tau}^{u}$}{C\textpinferior\textsinferior\textiinferior,\texttinferior\textainferior\textuinferior\unichar{"1D58}}}
In this section we will define a $\psi$ operator for $(\varphi, \tau)$-modules over $\Oo_{\widehat{\Ff^{\ur}_{u}}},$ and then construct a complex $\Cc_{\psi, \tau}^{u}.$ At the end of this section, we will show that this complex $\Cc_{\psi, \tau}^{u}$ computes the continuous Galois cohomology: we will first prove the case of $\FF_p$-representations and then pass to the case of $\ZZ_p$-representations by d\'evissage.\\
Recall that we have defined $\Oo_{\Ff^{\ur}_{u}}$ in definition \ref{def F ur Frob}; denote by $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ its $p$-adic completion. The ring $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ has a Frobenius map that lifts that of the residue field $k(\!(u, \eta^{1/p^{\infty}})\!)^{\sep}$ ($\cf$ \cite[Theorem 29.2]{Mat89}).
\begin{lemma}\label{lemm Fp-version degree of sep clo}
Let $M$ be a field of characteristic $p$ such that $[M:\varphi(M)]<\infty.$ Then $\varphi(M^{\sep})\otimes_{\varphi(M)}M\simeq M^{\sep}$ and in particular $[M^{\sep}: \varphi(M^{\sep})]=[M:\varphi(M)].$
\end{lemma}
\begin{proof}
It suffices to show that for any finite separable extension $L/M$ we have an isomorphism $M\otimes_{\varphi(M)}\varphi(L)\simeq L$ (the case of an arbitrary separable algebraic extension follows by passing to the inductive limit). By \cite[Theorem 26.4]{Mat89}, the natural map $M\otimes_{\varphi(M)}\varphi(L)\to M\varphi(L)$ is an isomorphism. This implies that $[M\varphi(L):M]=[\varphi(L):\varphi(M)]=[L:M].$ As $M\subset M\varphi(L)\subset L,$ we have $[L:M]=[L:M\varphi(L)][M\varphi(L):M]$ and hence $[L:M\varphi(L)]=1,$ $\ie$ $M\varphi(L)=L.$
\end{proof}
\begin{corollary}\label{coro Fp case degree p}
The extension $C_{u-\np}^\flat/\varphi(C_{u-\np}^\flat)$ has degree $p.$
\end{corollary}
\begin{proof}
By definition $C_{u-\np}^{\flat}=k(\!(u, \eta^{1/p^{\infty}})\!)^{\sep},$ and we apply lemma \ref{lemm Fp-version degree of sep clo}.
\end{proof}
\begin{lemma}\label{lemm Zp-version degree of sep clo}
The extension $\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})$ has degree $p.$
\end{lemma}
\begin{proof}
We have the diagram
\[ \xymatrix{
\Oo_{\widehat{\Ff^{\ur}_{u}}} \ar@{->>}[r] \ar[d]^{\varphi}& C_{u-\np}^\flat \ar[d]^{\varphi}\\
\Oo_{\widehat{\Ff^{\ur}_{u}}} \ar@{->>}[r]& C_{u-\np}^\flat
} \]
with $[C_{u-\np}^\flat: \varphi(C_{u-\np}^\flat)]=p.$ Hence there exist $a_1, \dots, a_p\in \Oo_{\widehat{\Ff^{\ur}_{u}}}$ whose images $(\overline{a_1}, \dots, \overline{a_p})$ modulo $p$ form a basis of $C_{u-\np}^\flat$ over $\varphi(C_{u-\np}^\flat).$ The following map is surjective:
\begin{align*}
\rho\colon \big( \varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})\big)^{p} &\to \Oo_{\widehat{\Ff^{\ur}_{u}}}\\
(\lambda_1, \dots, \lambda_p) & \mapsto \sum\limits_{i=1}^{p}\lambda_i a_i.
\end{align*}
Indeed, for any $a\in \Oo_{\widehat{\Ff^{\ur}_{u}}}$ there exists $(\lambda_1^{(1)}, \dots, \lambda_p^{(1)})\in \big( \varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})\big)^{p}$ such that
\[ a-\sum\limits_{i=1}^{p} \lambda_i^{(1)}a_i \in p\Oo_{\widehat{\Ff^{\ur}_{u}}}. \]
Hence there exists $(\lambda_1^{(2)}, \dots, \lambda_p^{(2)})\in \big( \varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})\big)^{p}$ such that
\[ a-\sum\limits_{i=1}^{p} \lambda_i^{(1)}a_i-p\sum\limits_{i=1}^{p} \lambda_i^{(2)}a_i \in p^2 \Oo_{\widehat{\Ff^{\ur}_{u}}}. \]
By induction, for any $n\in \NN$ there exists $(\lambda_1^{(n)}, \dots, \lambda_p^{(n)})\in \big( \varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})\big)^{p}$ such that
\[ a-\sum\limits_{i=1}^{p} \bigg(\sum\limits_{j=1}^n p^{j-1} \lambda_i^{(j)}\bigg)a_i \in p^{n}\Oo_{\widehat{\Ff^{\ur}_{u}}}. \]
As $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ is $p$-adically complete, we have
\[ a= \sum\limits_{i=1}^{p} \bigg(\sum\limits_{j=1}^{\infty} p^{j-1} \lambda_i^{(j)}\bigg)a_i. \]
Notice that $\sum\limits_{j=1}^{\infty} p^{j-1} \lambda_i^{(j)}\in \varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}}),$ which proves the surjectivity of the map $\rho$ defined above. Now $\Oo_{\widehat{\Ff^{\ur}_{u}}}$ is a $\varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})$-module of finite type and we can apply Nakayama's lemma: the map $\rho$ is an isomorphism since it is so modulo $p$ by corollary \ref{coro Fp case degree p}.
\end{proof}
\begin{definition}\label{def psi operator}
For any $x\in \widehat{\Ff^{\ur}_{u}},$ we put
\[ \psi(x)=\frac{1}{p}\varphi^{-1}(\Tr_{\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})}(x)). \]\index{$\psi$}
In particular we have $\psi\circ \varphi=\id_{\widehat{\Ff^{\ur}_{u}}}$ on $\widehat{\Ff^{\ur}_{u}}.$ Applying lemma \ref{lemm Zp-version degree of sep clo} to $\widehat{\Ee^{\ur}},$ we see that the operator $\psi$ induces an operator $\psi\colon \widehat{\Ee^{\ur}}\to \widehat{\Ee^{\ur}}.$ Note that $\psi(\Oo_{\widehat{\Ff^{\ur}_{u}}})\subset \Oo_{\widehat{\Ff^{\ur}_{u}}}$ and $\psi(\Oo_{\widehat{\Ee^{\ur}}})\subset \Oo_{\widehat{\Ee^{\ur}}}.$
\end{definition}
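Spelling out why $\psi\circ \varphi=\id$ holds: for $x\in \widehat{\Ff^{\ur}_{u}},$ the element $\varphi(x)$ already lies in $\varphi(\widehat{\Ff^{\ur}_{u}}),$ so the trace of the degree $p$ extension acts on it as multiplication by $p,$ and
\[ \psi(\varphi(x))=\frac{1}{p}\,\varphi^{-1}\Big(\Tr_{\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})}\big(\varphi(x)\big)\Big)=\frac{1}{p}\,\varphi^{-1}\big(p\,\varphi(x)\big)=x. \]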
\begin{remark}\label{rem Zp-case psi commutes with g}
We have $\psi\circ g=g\circ\psi$ for all $g\in\mathscr{G}_K.$ Indeed, we have the following commutative square
\[ \begin{tikzcd}
\widehat{\Ff^{\ur}_{u}} \arrow[r, "g"] \arrow[dd, "\varphi","\simeq"'] & \widehat{\Ff^{\ur}_{u}}\arrow[dd, "\varphi"',"\simeq"] \\
\\
\varphi(\widehat{\Ff^{\ur}_{u}}) \arrow[r, "g"] \arrow[uu, bend left, "\varphi^{-1}"]& \varphi(\widehat{\Ff^{\ur}_{u}}). \arrow[uu, bend right, "\varphi^{-1}"']
\end{tikzcd} \]
This implies $\varphi^{-1}$ commutes with $g\in \mathscr{G}_K$ over $\varphi(\widehat{\Ff^{\ur}_{u}}).$ As $\Tr_{\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})}\colon \widehat{\Ff^{\ur}_{u}} \to \varphi(\widehat{\Ff^{\ur}_{u}})$ commutes with $g\in \mathscr{G}_K,$ so does $\psi$ on $\widehat{\Ff^{\ur}_{u}}.$
\end{remark}
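Explicitly, the commutation in the remark above amounts to the chain of equalities, for $g\in \mathscr{G}_K$ and $x\in \widehat{\Ff^{\ur}_{u}}:$
\[ g(\psi(x))=\frac{1}{p}\,g\Big(\varphi^{-1}\big(\Tr_{\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})}(x)\big)\Big)=\frac{1}{p}\,\varphi^{-1}\Big(\Tr_{\widehat{\Ff^{\ur}_{u}}/\varphi(\widehat{\Ff^{\ur}_{u}})}\big(g(x)\big)\Big)=\psi(g(x)), \]
where the middle equality uses that $\varphi^{-1}$ and the trace both commute with $g.$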
\begin{proposition}
\label{prop Zp-case psi surjective}
Let $(D, D_{\tau})\in \Mod_{\Oo_{\Ee}, \Oo_{\Ee_{\tau}}}(\varphi, \tau).$ There exists a unique additive map
\[ \psi_D\colon D\to D, \]
satisfying
\item (1) \[ (\forall a\in \Oo_{\Ee}) ( \forall x\in D )\quad \psi_D(a \varphi_D (x))=\psi_{\Oo_{\Ee}}(a)x, \]
\item (2) \[ (\forall a\in \Oo_{\Ee}) (\forall x\in D) \quad \psi_D(\varphi_{\Oo_{\Ee}}(a)x)=a\psi_D(x). \]
This map is surjective and satisfies $\psi_D \circ \varphi_D =\id_D$.\\
There also exists a unique additive map $\psi_{D_{u,\tau}}\colon D_{u,\tau} \to D_{u,\tau}$ that satisfies conditions analogous to those above and extends $\psi_D.$
\end{proposition}
\begin{proof} Let $T\in \Rep_{\ZZ_p}(\mathscr{G}_K)$ be such that $D=\mathcal{D}(T).$ We have defined $\psi$ on $\widehat{\Ff^{\ur}_{u}},$ hence it is defined over $\Ee_{u,\tau}=(\widehat{\Ff^{\ur}_{u}})^{\mathscr{G}_L}.$ The operator $\psi\otimes 1$ on $\Oo_{\widehat{\Ee^{\ur}}}\otimes_{\ZZ_p}T$ and $\Oo_{\widehat{\Ff^{\ur}_{u}}}\otimes_{\ZZ_p}T$ induces operators $\psi$ on $D$ and $D_{u,\tau}.$ One easily verifies the above properties: this shows the existence. The unicity follows from the fact that $D$ is \'etale, $\ie \varphi(D)$ generates $D$ as an $\Oo_{\Ee}$-module.
\end{proof}
\begin{remark}\label{rem why we need unperfected version} (1) Let $(D, D_{\tau})\in \Mod_{\Oo_{\Ee}, \Oo_{\Ee_{\tau}}}(\varphi, \tau).$ Suppose there existed an operator $\psi_{D_{\tau}}$ on $D_{\tau}$ extending $\psi_D$ and satisfying $\psi_{D_{\tau}}\circ \varphi_{D_{\tau}}=\id_{D_{\tau}}.$ As $\varphi_{D_{\tau}}$ is bijective on $D_{\tau}$ (since $D$ is \'etale and $\varphi_{\Oo_{\Ee_{\tau}}}$ is bijective on $\Oo_{\Ee_{\tau}}$), the operator $\psi_{D_{\tau}}$ would then be bijective, contradicting the fact that $\psi_D$ is not injective.
\item (2) When there is no confusion, we will simply denote $\psi$ instead of $\psi_{\Oo_{\Ee}}, \psi_{D}$ and $\psi_{D_{u,\tau}}$.
\end{remark}
\begin{lemma}\label{lemm psi D u tau 0}
Let $(D, D_{u, \tau})\in \Mod_{\Oo_{\Ee}, \Oo_{\Ee_{u,\tau}}}(\varphi, \tau)$. We have a map $\psi\colon D_{u, \tau, 0}\to D_{u, \tau, 0}.$
\end{lemma}
\begin{proof}
We have the $\psi$ operator on $D_{u, \tau}$ by proposition \ref{prop Zp-case psi surjective}. Notice that $\psi$ commutes with $g\in \mathscr{G}_K$ by remark \ref{rem Zp-case psi commutes with g}, hence $\psi$ induces a $\ZZ_p$-linear endomorphism on $D_{u, \tau, 0}.$ Indeed, if $x\in D_{u, \tau, 0},$ then we have
\[(\gamma\otimes 1)x=(1+\tau_D+\cdots +\tau_D^{\chi(\gamma)-1})x.\]
Applying $\psi$ to both sides and by the commutativity we have
\[(\gamma\otimes 1)(\psi(x))=(1+\tau_D+\cdots +\tau_D^{\chi(\gamma)-1})\psi(x).\]
This implies $\psi(x)\in D_{u,\tau, 0}.$
\end{proof}
\begin{remark}
By proposition \ref{prop Zp-case psi surjective} and lemma \ref{lemm psi D u tau 0}, $\psi$ is surjective on $D$ and $D_{u,\tau,0}.$
\end{remark}
We now define the following complex:
\begin{definition}\label{def complex psi tau}
Let $(D, D_{u,\tau})\in \Mod_{\Oo_{\Ee},\Oo_{\Ee_{u,\tau}}}(\varphi, \tau).$ We define a complex $\Cc_{\psi, \tau}^{u}(D)$\index{$\Cc_{\psi, \tau}^{u}(D)$} as follows:
\[ \xymatrix{0\ar[rr] && D \ar[r] & D \oplus D_{u,\tau, 0} \ar[r] & D_{u, \tau, 0} \ar[r] & 0 \\
&& x \ar@{|->}[r] & ((\psi-1)(x), (\tau_D-1)(x)) &&\\
&&& (y,z) \ar@{|->}[r] & (\tau_D-1)(y)-(\psi-1)(z).
} \]
If $T\in \Rep_{\ZZ_p}(\mathscr{G}_K),$ we have in particular the complex $\Cc_{\psi, \tau}^{u}(\mathcal{D}(T)),$ which will also be simply denoted $\Cc_{\psi, \tau}^{u}(T).$\index{$\Cc_{\psi, \tau}^{u}(T)$}
\end{definition}
\begin{theorem}\label{prop quais-iso psi} The morphism of complexes
\[
\xymatrix{
\Cc^{u}_{\varphi,\tau}\colon & 0\ar[rr] && D \ar[rr]^-{(\varphi-1, \tau_D-1)} \ar@{=}[d] && D\oplus D_{u, \tau,0} \ar[rr]^-{(\tau_D-1)\ominus(\varphi-1)} \ar[d]^{-\psi\ominus \id} && D_{u, \tau, 0}\ar[rr] \ar[d]^{-\psi} && 0 \\
\Cc^{u}_{\psi,\tau}\colon & 0\ar[rr] && D \ar[rr]^-{(\psi-1, \tau_D-1)} && D\oplus D_{u, \tau,0} \ar[rr]^-{(\tau_D-1)\ominus(\psi-1)} && D_{u, \tau, 0}\ar[rr] && 0
} \]
is a quasi-isomorphism.
\end{theorem}
\begin{remark}\label{rem psi embedding Zp case} (1) The diagram in theorem \ref{prop quais-iso psi} is indeed a morphism of complexes.
\begin{proof}
As $\psi$ commutes with the Galois action, it induces a map $\psi\colon D_{u,\tau,0}\to D_{u,\tau,0}.$ We claim that ${\psi_{D_{u,\tau}}}_{|_{D}}=\psi_{D},$ and hence the diagram in theorem \ref{prop quais-iso psi} commutes. Indeed, we have the following commutative square
\[ \xymatrix{
\Oo_{\widehat{\Ee^{\ur}}} \ar@{^(->}[r] & \Oo_{\widehat{\Ff^{\ur}_{u}}}\\
\Oo_{\widehat{\Ee^{\ur}}} \ar@{^(->}[r] \ar[u]^{\varphi} & \Oo_{\widehat{\Ff^{\ur}_{u}}} \ar[u]^{\varphi}. }
\]
By lemma \ref{lemm Zp-version degree of sep clo}, $[ \Oo_{\widehat{\Ee^{\ur}}}:\varphi( \Oo_{\widehat{\Ee^{\ur}}})]=[\Oo_{\widehat{\Ff^{\ur}_{u}}}:\varphi(\Oo_{\widehat{\Ff^{\ur}_{u}}})]=p,$ hence we conclude by the construction of $\psi.$
\end{proof}
\item (2) We have the diagram
\[
\xymatrix{
&0 \ar[r] & 0 \ar[rr]\ar[d] && D^{\psi=0} \ar[rr]^-{\tau_D-1} \ar[d] && D_{u,\tau, 0}^{\psi=0 } \ar[r]\ar[d]& 0\\
\Cc^{u}_{\varphi,\tau}\colon & 0\ar[r] & D \ar[rr]^-{(\varphi-1, \tau_D-1)} \ar@{=}[d] && D\oplus D_{u, \tau,0} \ar[rr]^-{(\tau_D-1)\ominus(\varphi-1)} \ar[d]^{-\psi\ominus \id} && D_{u, \tau, 0}\ar[r] \ar[d]^{-\psi} & 0 \\
\Cc^{u}_{\psi,\tau}\colon & 0 \ar[r] & D \ar[rr]^-{(\psi-1, \tau_D-1)} \ar[d]&& D\oplus D_{u, \tau,0} \ar[rr]^-{(\tau_D-1)\ominus(\psi-1)} \ar[d]&& D_{u, \tau, 0}\ar[r] \ar[d]& 0\\
& 0 \ar[r]& 0 \ar[rr] &&0 \ar[rr] &&0 \ar[r] &0.
} \]
As $\psi$ is surjective, the cokernel complex is trivial and it suffices to show that the kernel complex is acyclic, $\ie$ the map
\[ D^{\psi=0} \xrightarrow{\tau_D-1} D_{u, \tau, 0}^{\psi=0} \]
is an isomorphism.
\end{remark}
To prove the theorem \ref{prop quais-iso psi}, we will start with the case of $\FF_p$-representations and then pass to the $\ZZ_p$-representations by d\'evissage.
\subsection{The case of \texorpdfstring{$\FF_p$}{\unichar{"0046}_p}-representations}
We assume in this subsection that $T\in \Rep_{\FF_p}(\mathscr{G}_K).$
\begin{lemma}\label{lemm point fixed by tau p n}
For all $r\in \NN,$ we have $(C^{\flat}_{u-\np})^{\mathscr{G}_{K_\zeta}}=F_{u,\tau}^{\tau^{p^r}}=k(\!(\eta^{1/p^\infty})\!).$
\end{lemma}
\begin{proof}
Let $z\in F_{u, \tau},$ and write $z=\sum\limits_{i=m}^{\infty}f_i(\eta)u^i,$ where $m\in \ZZ$ and $f_i(X)\in k(\!(X^{1/p^{n_i}})\!)$ for some $n_i\in \NN.$ We have $\tau^{p^r}(z)=\sum\limits_{i=m}^{\infty}\varepsilon^{ip^r}f_i(\eta)u^i,$ so that $\tau^{p^r}(z)=z$ implies $\varepsilon^{ip^r}f_i(\eta)=f_i(\eta)$ and hence $f_i=0$ for $i\not=0.$ We conclude that $z=f_0(\eta)\in \bigcup\limits_{n\in \NN} k(\!(\eta^{1/p^n})\!)$ and hence $F_{u,\tau}^{\tau^{p^r}} \subset k(\!(\eta^{1/p^\infty})\!).$ Conversely, we have $ k(\!(\eta^{1/p^\infty})\!) \subset F_{u,\tau}^{\tau^{p^r}},$ as $k(\!(\eta^{1/p^\infty})\!)\subset F_{u,\tau}$ and $\tau$ acts trivially over $K_{\zeta}.$ Notice that $\mathscr{G}_{K_{\zeta}}=\la \mathscr{G}_L, \tau \ra$ and $F_{u,\tau}=(C^{\flat}_{u-\np})^{\mathscr{G}_L}$ by corollary \ref{coro F u tau = C GL}, so that $(C^{\flat}_{u-\np})^{\mathscr{G}_{K_\zeta}}=F_{u,\tau}^{\tau},$ which equals $k(\!(\eta^{1/p^\infty})\!)$ by a computation similar to the one above.
\end{proof}
\begin{lemma}\label{lemma varphi is surjective over non-perfect ring modified}
Put $\mathbf{D}(T)=(C^{\flat}_{u-\np}\otimes T)^{\mathscr{G}_{K_{\zeta}}}.$ We have a $\mathscr{G}_{K_{\zeta}}$-equivariant isomorphism
\[ C^{\flat}_{u-\np}\otimes_{\FF_p}T\simeq C^{\flat}_{u-\np}\otimes_{k(\!(\eta^{1/p^\infty})\!)}\mathbf{D}(T). \]
In particular, we have $\dim_{k(\!(\eta^{1/p^{\infty}})\!)}\mathbf{D}(T)=\dim_{\FF_p}T.$
\end{lemma}
\begin{proof}
Denote $\DD(T)=\big(k(\!( \eta^{1/p^{\infty}} )\!)^{\sep} \otimes_{\FF_p} T \big)^{\mathscr{G}_{K_{\zeta}}}.$ Then, by the theory of the field of norms (for perfect fields, $\cf$ \cite[Proposition 1.2.4]{Fon90}), we have the following $\mathscr{G}_{K_\zeta}$-equivariant isomorphism
\[ k(\!( \eta^{1/p^{\infty}} )\!)^{\sep} \otimes_{k(\!(\eta^{1/p^{\infty}})\!)} \DD(T) \simeq k(\!( \eta^{1/p^{\infty}} )\!)^{\sep} \otimes_{\FF_p} T. \]
Tensoring with $C_{u-\np}^{\flat}$ over $k(\!(\eta^{1/p^{\infty}})\!)^{\sep}$, we have
\begin{equation}\label{equ remark varphi is surjective over non-perfect ring modified}
C_{u-\np}^{\flat}\otimes_{k(\!(\eta^{1/p^{\infty}})\!)} \DD(T) \simeq C_{u-\np}^{\flat}\otimes_{\FF_p} T.
\end{equation}
Taking the points fixed by $\mathscr{G}_{K_{\zeta}}$ on both sides gives
\[ \big( C_{u-\np}^{\flat}\otimes_{k(\!(\eta^{1/p^{\infty}})\!)} \DD(T) \big)^{\mathscr{G}_{K_{\zeta}}} \simeq \big( C_{u-\np}^{\flat}\otimes_{\FF_p} T \big)^{\mathscr{G}_{K_{\zeta}}} = \mathbf{D}(T). \]
As $\DD(T)$ is fixed by ${\mathscr{G}_{K_{\zeta}}}$ by definition, the left hand side is
\[ \big( C_{u-\np}^{\flat}\otimes_{k(\!(\eta^{1/p^{\infty}})\!)} \DD(T) \big)^{\mathscr{G}_{K_{\zeta}}}= ( C_{u-\np}^{\flat} )^{\mathscr{G}_{K_{\zeta}}} \otimes_{k(\!(\eta^{1/p^{\infty}})\!)} \DD(T) = \DD(T) \]
by lemma \ref{lemm point fixed by tau p n}.
This proves that $\DD(T)=\mathbf{D}(T),$ hence equation \ref{equ remark varphi is surjective over non-perfect ring modified} gives what we want.
\end{proof}
\begin{lemma}\label{lemm x in D(T)}
Let $r\in \NN_{>0}.$ We then have $\mathcal{D}(T)_{u, \tau}^{\tau_D^{p^r}}\subset \mathbf{D}(T).$
\end{lemma}
\begin{proof}
We have $\mathcal{D}(T)_{u,\tau}^{\tau_D^{p^r}}= (C^{\flat}_{u-\np}\otimes_{\FF_p} T)^{\mathscr{G}_{K_{\zeta}(\pi^{1/p^r})}}.$
Notice that by lemma \ref{lemma varphi is surjective over non-perfect ring modified}, we have
\begin{align*}
(C^{\flat}_{u-\np}\otimes_{\FF_p} T)^{\mathscr{G}_{K_{\zeta}(\pi^{1/p^r})}}&=(C^{\flat}_{u-\np})^{\mathscr{G}_{K_{\zeta}(\pi^{1/p^r})}}\otimes \mathbf{D}(T)\\
&=((C^{\flat}_{u-\np})^{\mathscr{G}_L})^{\tau^{p^r}}\otimes \mathbf{D}(T)\\
&=F_{u,\tau}^{\tau^{p^r}}\otimes \mathbf{D}(T)\\
&=\mathbf{D}(T).
\end{align*}
The last step follows from lemma \ref{lemm point fixed by tau p n}, as $\bigcup\limits_{n\in \NN} k(\!(\eta)\!)[\eta^{1/p^n}]\subset (C^{\flat}_{u-\np})^{\mathscr{G}_{K_{\zeta}}}.$
\end{proof}
\begin{proposition}\label{prop tau-1 high power inj}
The map $\frac{\tau_D^{p^r}-1}{\tau_D-1}\colon D_{u,\tau,0}^{\psi=0}\to D_{u,\tau^{p^r},0}^{\psi=0}$ is injective.
\end{proposition}
\begin{proof}
Take any $x\in D_{u, \tau, 0}^{\psi=0}$ with $\frac{\tau_D^{p^r}-1}{\tau_D-1}(x)=0.$ Then in particular $(\tau_D^{p^r}-1)x=0,$ hence $x\in D_{u, \tau}^{\tau_D^{p^r}, \psi=0}$ and $x\in \mathbf{D}(T)^{\psi=0}$ by lemma \ref{lemm x in D(T)}. Lemmas \ref{lemm point fixed by tau p n} and \ref{lemma varphi is surjective over non-perfect ring modified} imply that $x=\varphi(x^{\prime})$ for some $x^{\prime}\in \mathbf{D}(T)$ (as $\mathbf{D}(T)$ is \'etale and the base field $k(\!(\eta^{1/p^\infty})\!)$ is perfect), and hence $0=\psi(x)=\psi(\varphi(x^{\prime}))=x^{\prime},$ so $x=0.$
\end{proof}
Recall that by remark \ref{rem psi embedding Zp case} (2), to prove theorem \ref{prop quais-iso psi} it suffices to prove that the map $D^{\psi=0} \xrightarrow{\tau_D-1} D_{u, \tau, 0}^{\psi=0}$ is an isomorphism. We first prove the case of $\FF_p$-representations, in several steps.
\subsubsection{The injectivity}
Recall that $F_0=k(\!(u)\!)$ embeds into $C^{\flat}$ by $u\mapsto \tilde{\pi}.$
\begin{lemma}\label{lemma I.3.1} We have $F_0^{\tau=1}=k.$
\end{lemma}
\begin{proof}
Recall that $\tau(u)=\varepsilon u= (\eta+1)u$ and $\tau(u^i)=(\eta +1)^iu^i.$ Suppose $x=\sum\limits_{i=i_0}^{\infty}\lambda_i u^i\in F_0^{\tau=1}$ with $\lambda_i\in k,$ then $\tau(x)=\sum\limits_{i=i_0}^{\infty}\lambda_i (\eta+1)^i u^i.$
Hence $\lambda_i=\lambda_i(\eta+1)^i,$ so that $\lambda_i((\eta+1)^i-1)=0$ for all $i\geq i_0.$ If $i\neq 0,$ then $(\eta+1)^i\neq 1,$ hence $(\eta+1)^i-1\in k(\!(\eta)\!)^{\times}$ so that $\lambda_i=0.$ In particular, $x=\lambda_0\in k.$
\end{proof}
\begin{lemma}\label{lemm Brinon saves}
We have $k(\!(\eta^{1/p^{\infty}})\!)^{\gamma=1}=k.$
\end{lemma}
\begin{proof}
We have $k(\!(\eta^{1/p^{\infty}})\!)\subset (C^{\flat})^{\mathscr{G}_{K_{\zeta}}},$ hence $k(\!(\eta^{1/p^{\infty}})\!)^{\gamma=1}\subset (C^{\flat})^{\mathscr{G}_{K}}.$ As $C^{\flat}=\pLim{x\mapsto x^p} C,$ we have
\[(C^{\flat})^{\mathscr{G}_K}=\pLim{x\mapsto x^p}K.\]
Let $x=(x_n)_n\in \pLim{x\mapsto x^p}K,$ so that $x_0=x_n^{p^n}$ for all $n\in \NN.$ In particular, we have $v(x_0)=p^nv(x_n).$ If $x\neq0,$ this implies $v(x_n)=0$ for all $n\in \NN,$ since the valuation $v$ is discrete. Let $\overline{x_0}$ be the image of $x_0$ in $k=\Oo_K/(\pi)$ and $y=\big([\overline{x_0}^{p^{-n}}] \big)_n\in \pLim{x\mapsto x^p}K;$ then $y^{-1}x=(z_n)_n\in \pLim{x\mapsto x^p}K$ satisfies $z_n\equiv 1$ mod $\pi\Oo_K$ for all $n\in \NN.$ This implies $z_n=z_{n+m}^{p^m}\equiv 1$ mod $\pi^{m+1}\Oo_K$ for all $m\in \NN,$ hence $z_n=1$ for all $n\in\NN.$ This shows that $x=y,$ and that the map
\begin{align*}
k &\to \pLim{x\mapsto x^p} K=(C^{\flat})^{\mathscr{G}_K}\\
\alpha &\mapsto \big([\alpha^{p^{-n}}] \big)_n
\end{align*}
is a ring isomorphism. As $k\subset k(\!(\eta^{1/p^{\infty}})\!)^{\gamma=1}\subset (C^{\flat})^{\mathscr{G}_K}$, this shows that $k=k(\!(\eta^{1/p^{\infty}})\!)^{\gamma=1}=(C^{\flat})^{\mathscr{G}_K}.$
\end{proof}
\begin{proposition}
The natural map $k(\!(\eta^{1/p^{\infty}})\!)\otimes_k \mathbf{D}(T)^{\gamma=1}\to \mathbf{D}(T)$ is injective.
\end{proposition}
\begin{proof}
We use the standard argument: assume the map is not injective and let $x=\sum\limits_{i=1}^r\lambda_i\otimes \alpha_i,$ with $\lambda_i\in k(\!(\eta^{1/p^{\infty}})\!)$ and $\alpha_i\in \mathbf{D}(T)^{\gamma=1}$ for all $i,$ be a nonzero element in the kernel with $r$ minimal. Dividing by $\lambda_r$ we may assume that $\lambda_r=1.$ As $\sum\limits_{i=1}^{r}\gamma(\lambda_i)\otimes \alpha_i$ maps to $\gamma(\sum\limits_{i=1}^{r}\lambda_i\alpha_i)=0,$ the element $\sum\limits_{i=1}^{r-1}(\gamma-1)(\lambda_i)\otimes \alpha_i$ lies in the kernel; by the minimality of $r,$ we have $\gamma(\lambda_i)=\lambda_i,$ $\ie$ $\lambda_i\in k$ for all $i\in \{ 1, \ldots, r\}$ ($\cf$ lemma \ref{lemm Brinon saves}). Then $x=1\otimes(\sum\limits_{i=1}^r\lambda_i\alpha_i)=0,$ contradicting the assumption. Hence the map is injective.
\end{proof}
\begin{corollary}\label{coro dim Brinon}
We have $\dim_k\mathbf{D}(T)^{\gamma=1}\leq \dim_{\FF_p}T.$
\end{corollary}
\begin{lemma}
\label{lemm 1-tau inj Cherbonnier}
If $x\in D^{\psi=0}$ is such that $\tau_D(x)=x,$ then $x=0.$
\end{lemma}
\begin{proof}
Let $x\in D^{\psi=0}$ be such that $\tau_D(x)=x.$ This shows that $x,$ seen as an element of $C^{\flat}_{u-\np}\otimes_{\FF_p}T,$ is fixed by $\mathscr{G}_K:$ it belongs to $\mathbf{D}(T)^{\gamma=1}.$ By corollary \ref{coro dim Brinon}, the latter is a finite dimensional $k$-vector space. It is endowed with the restriction $\varphi$ of the Frobenius map, and this restriction is injective. As $k$ is perfect, this implies that $\varphi\colon \mathbf{D}(T)^{\gamma=1}\to \mathbf{D}(T)^{\gamma=1}$ is bijective: there exists $y\in \mathbf{D}(T)^{\gamma=1}$ such that $x=\varphi(y).$ Then $y=\psi\varphi(y)=\psi(x)=0,$ hence $x=\varphi(y)=0.$
\end{proof}
\begin{corollary}\label{cor injective before "the trivial case"}
The map $D^{\psi=0} \xrightarrow{\tau_D-1} D_{u, \tau, 0}^{\psi=0}$ is injective.
\end{corollary}
\subsubsection{The trivial case}
Recall that $\psi\colon F_0\to F_0$ is given by the formula $\psi(x)=\frac{1}{p}\varphi^{-1}(\Tr_{F_0/\varphi(F_0)}(x)).$ The elements $1, u, \ldots, u^{p-1}$ form a basis of $F_0$ over $\varphi(F_0).$
\begin{lemma}\label{lemm psi mini poly}
Let $x=\sum\limits_{i=0}^{p-1}x_iu^i$ be an element of $F_0$ with $x_i\in \varphi(F_0).$ Then we have $\psi(x)=\varphi^{-1}(x_0).$
\end{lemma}
\begin{proof}
For $1\leq i<p,$ we have $\psi(u^i)=0$ as $\Tr_{F_0/\varphi(F_0)}(u^{i})=0:$ indeed, the minimal polynomial of $u^i$ over $\varphi(F_0)$ is $f(X)=X^p-u^{pi}$ when $1\leq i <p,$ whose trace coefficient vanishes. On the other hand, $\Tr_{F_0/\varphi(F_0)}(x_0)=px_0$ as $x_0\in \varphi(F_0),$ so that $\psi(x)=\frac{1}{p}\varphi^{-1}(px_0)=\varphi^{-1}(x_0).$
\end{proof}
\begin{corollary}\label{coro psi=0 3.3.23}
An element $x=\sum\limits_{i=0}^{p-1}x_iu^i\in F_0$ with $x_i\in \varphi(F_0)$ is killed by $\psi$ if and only if $x_0=0.$
\end{corollary}
\begin{proof}
By lemma \ref{lemm psi mini poly} and the injectivity of $\varphi.$
\end{proof}
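For instance, since $\varphi(u)=u^p$ in $F_0,$ lemma \ref{lemm psi mini poly} gives
\[ \psi(u^p)=\varphi^{-1}(u^p)=u, \qquad \psi(u^{p+1})=0, \]
the second equality because $u^{p+1}=u^p\cdot u$ has no component on $1$ in the basis $1, u, \ldots, u^{p-1}$ over $\varphi(F_0).$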
Let $z=\sum\limits_{i=1}^{p-1}u^iz_i\in {F_0}^{\psi=0}$ with $z_i\in \varphi(F_0)=k(\!(u^p)\!).$ More precisely, write $z_i=\sum\limits_{j=n_i}^{+\infty}b_{ij}u^{pj}$ with $b_{ij}\in k.$ Then $z=\sum\limits_{i=1}^{p-1}\sum\limits_{j=n_i}^{+\infty}b_{ij}u^{i+pj}$ with $b_{ij}\in k$ so that
\begin{equation}\label{eq tau-1}
(\tau-1)z=\sum\limits_{i=1}^{p-1}\sum\limits_{j=n_i}^{\infty}b_{ij}u^{i+pj}(\varepsilon^{i+pj}-1).
\end{equation}
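Equation (\ref{eq tau-1}) is obtained monomial by monomial: $\tau$ fixes $k$ and $\tau(u)=\varepsilon u,$ so that for $m=i+pj$ we have
\[ (\tau-1)(b\,u^m)=b\,(\varepsilon u)^m-b\,u^m=b\,u^m(\varepsilon^m-1), \qquad b\in k. \]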
\begin{lemma}\label{lemm inverse of tau-1}\label{coro inverse almost}
The map $\tau-1\colon F_0^{\psi=0}\to F_{u,\tau,0}^{\psi=0}$ is surjective.
\end{lemma}
\begin{proof}
Let $x \in F_{u,\tau,0}^{\psi=0}\subset k(\!(u, \eta^{1/p^{\infty}} )\!)^{\psi=0}:$ by corollary \ref{coro psi=0 3.3.23} we can write uniquely $x=\sum\limits_{i=1}^{p-1}u^i x_i$ with \[x_i\in \varphi\big(k(\!(u, \eta^{1/p^{\infty}} )\!)^{\psi=0}\big).\]
We can write
\[x_i=\sum\limits_{j=m_i}^{\infty}u^{pj}f_{ij}(\eta)\] with $m_i\in \ZZ$ and $f_{ij}(X)\in k(\!(X^{1/p^{n_{ij}}})\!)$ for some $n_{ij}\in \NN$, hence \[x=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}u^{i+pj} f_{ij}(\eta).\]
By definition, $x\in F_{u, \tau, 0}$ implies
\[\gamma(x)=(1+\tau+\cdots + \tau^{\chi(\gamma)-1})(x).\]
The left hand side is (recall that $\eta=\varepsilon-1$)
\[ \gamma(x)=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}u^{i+pj} f_{ij}(\gamma(\eta))=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}u^{i+pj} f_{ij}(\varepsilon^{\chi(\gamma)}-1), \]
and the right hand side is
\[ (1+\tau+\cdots+\tau^{\chi(\gamma)-1})(x)=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}u^{i+pj}\cdot(1+\varepsilon^{i+pj}+ \varepsilon^{2(i+pj)} + \cdots + \varepsilon^{(\chi(\gamma)-1)(i+pj)})\cdot f_{ij}(\eta).\] This implies that for all $i, j,$ we have
\[ f_{ij}(\gamma(\eta))=f_{ij}(\varepsilon^{\chi(\gamma)}-1)=(1+\varepsilon^{i+pj}+ \varepsilon^{2(i+pj)}+ \cdots + \varepsilon^{(\chi(\gamma)-1)(i+pj)})\cdot f_{ij}(\eta).\]
Putting $l=\chi(\gamma)$ and $m=i+pj$ (note that $m\neq 0$ as $p\nmid i$), the condition translates into
\[\gamma(f_{ij}(\eta))=\frac{\varepsilon^{lm}-1}{\varepsilon^m-1}f_{ij}(\eta),\]
$\ie$
\[\gamma\Big( \frac{f_{ij}(\eta)}{\varepsilon^m-1} \Big) = \frac{f_{ij}(\eta)}{\varepsilon^m-1}. \]
As $\frac{f_{ij}(\eta)}{\varepsilon^m-1}\in k(\!(\eta^{1/p^{\infty}})\!),$ we have $\frac{f_{ij}(\eta)}{\varepsilon^m-1}\in k$ by lemma \ref{lemm Brinon saves}: there exist $b_{ij}\in k$ such that \[x=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}b_{ij}u^{i+pj}(\varepsilon^{i+pj}-1).\]
By equation (\ref{eq tau-1}), an inverse image of $x=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}b_{ij}u^{i+pj}(\varepsilon^{i+pj}-1)$ is \[ (\tau-1)^{-1}(x):=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}b_{ij}u^{i+pj}.\]
\end{proof}
\begin{corollary}\label{coro form of F u tau 0}
An element $x\in \Oo_{\Ee_{u, \tau, 0}}^{\psi=0}$ can be written in the form
\[ x=\sum\limits_{i=1}^{p-1}\sum\limits_{j\in \ZZ}c_{ij}u^{i+pj}([\varepsilon]^{i+pj}-1) \]
with $c_{ij}\in \W(k)$ such that $\Lim{j \to -\infty}c_{ij}=0$ for all $i\in \{1, \ldots, p-1\}.$
\end{corollary}
\begin{proof}
By lemma \ref{coro inverse almost}, elements of $F_{u,\tau,0}^{\psi=0}$ can be written in the form
\[x=\sum\limits_{i=1}^{p-1}\sum\limits_{j=m_i}^{\infty}b_{ij}u^{i+pj}(\varepsilon^{i+pj}-1) \text{ with } b_{ij}\in k.\] We then conclude by d\'evissage.
\end{proof}
\begin{corollary}
The map $\tau-1\colon F_0^{\psi=0}\to F_{u,\tau, 0}^{\psi=0}$ is bijective.
\end{corollary}
\begin{proof}
This follows from corollary \ref{cor injective before "the trivial case"} and lemma \ref{coro inverse almost}.
\end{proof}
We have proved the bijection for trivial representations, and we now prove the general case.
\subsubsection{The general case}
Recall that $D=\mathcal{D}(T)$ is an \'etale $(\varphi,\tau)$-module; the natural map $\varphi^\ast\colon F_0\otimes_{\varphi,F_0}D\to D$ is an isomorphism. Let $(e_1,\ldots,e_d)$ be a basis of $D$ over $F_0;$ then $(\varphi(e_1),\ldots,\varphi(e_d))$ is again a basis, $\ie$ $D=\oplus_{i=1}^dF_0\varphi(e_i).$
\begin{lemm}\label{lemm psi semi-linearity in another sense}
If $x=\sum\limits_{i=1}^d\lambda_i\varphi(e_i)\in D,$ then $\psi(x)=\sum\limits_{i=1}^d\psi(\lambda_i)e_i.$
\end{lemm}
\begin{proof}
We have $D=(F_0^{\sep}\otimes_{\FF_p}T)^{\mathscr{G}_{K_\pi}}.$ Let $(v_1,\ldots,v_d)$ be a basis of $T$ over $\FF_p:$ we can write $e_i=\sum\limits_{j=1}^d\alpha_{ij}\otimes v_j,$ so that $\varphi(e_i)=\sum\limits_{j=1}^d\alpha_{ij}^p\otimes v_j,$ hence $\psi(\lambda_i\varphi(e_i))=\sum\limits_{j=1}^d\psi(\lambda_i\alpha_{ij}^p)\otimes v_j=\sum\limits_{j=1}^d\psi(\lambda_i)\alpha_{ij}\otimes v_j=\psi(\lambda_i)e_i$ for all $i\in\{1,\ldots,d\}.$
\end{proof}
\begin{coro}
We have $\sum\limits_{i=1}^d\lambda_i\varphi(e_i)\in D^{\psi=0}$ if and only if $\lambda_i\in F_0^{\psi=0}$ for all $i\in\{1,\ldots,d\}.$
\end{coro}
\begin{proof}
This follows from lemma \ref{lemm psi semi-linearity in another sense}.
\end{proof}
\begin{coro}\label{lemm psi semi-linearity in another sense: tau case}
If $\mu_1,\ldots,\mu_d\in F_{u,\tau},$ we have $\sum\limits_{i=1}^d\mu_i\varphi(e_i)\in D_{u,\tau}^{\psi=0}$ if and only if $\mu_i\in F_{u,\tau}^{\psi=0}$ for all $i\in\{1,\ldots,d\}.$
\end{coro}
\begin{proof}
The proof is similar to that of lemma \ref{lemm psi semi-linearity in another sense}.
\end{proof}
\begin{lemm}\label{lemm5.18}
Let $n\in\ZZ$ and $f=\sum\limits_{j=n}^\infty\lambda_ju^j\in F_{u,\tau},$ where $\lambda_j\in k(\!(\eta^{1/p^\infty})\!):=\bigcup\limits_{s=0}^\infty k(\!(\eta^{1/p^s})\!).$ Then $\psi(f)=0$ if and only if $\lambda_j=0$ whenever $p\mid j.$
\end{lemm}
\begin{proof}
If $\lambda\in k(\!(\eta^{1/p^{\infty}})\!)$ and $i\in \ZZ$, we have
\[ \psi(\lambda u^i)=\psi\big(\varphi(\varphi^{-1}(\lambda))u^i)=\varphi^{-1}(\lambda)\psi(u^i), \]
and
\[ \psi(u^i)=\begin{cases}
u^{i/p},\ \text{ if } p|i\\
0,\ \text{ else.}
\end{cases} \]
We have thus
\[ \psi(f)=\sum\limits_{j\geq n,\, p|j}\varphi^{-1}(\lambda_j)u^{j/p}, \]
so that $\psi(f)=0$ if and only if $\varphi^{-1}(\lambda_j)=0,$ $\ie$ $\lambda_j=0,$ for all $j$ such that $p\mid j$.
\end{proof}
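For instance, taking $f=\eta u^p+u$ (so that $\lambda_p=\eta$ and $\lambda_1=1$), the formula in the proof gives
\[ \psi(f)=\varphi^{-1}(\eta)u=\eta^{1/p}u\neq 0, \]
in accordance with lemma \ref{lemm5.18}: the coefficient $\lambda_p$ of an index divisible by $p$ is nonzero.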
\begin{notation}
We have $k[\![u,\eta]\!]\subset k(\!(\eta)\!)[\![u]\!],$ so that there is an inclusion $k(\!(u,\eta)\!)\subset k(\!(\eta)\!)(\!(u)\!),$ hence an inclusion $F_{u,\tau}=k(\!(u,\eta^{1/p^\infty})\!)\subset\bigcup\limits_{n=0}^\infty k(\!(\eta)\!)(\!(u)\!)[\eta^{1/p^n}]$ ($\cf$ notation \ref{rem fixed finite degree}). Put $$\mathscr{D}=\bigoplus_{i=1}^dk[\![u]\!]\varphi(e_i)$$\index{$\mathscr{D}$} and $$\mathscr{D}_{u,\tau}=\bigoplus_{i=1}^dk(\!(\eta)\!)[\![u]\!]\varphi(e_i).$$\index{$\mathscr{D}_{u,\tau}$} By construction, we have $D_{u,\tau}\subset\bigcup\limits_{n=0}^\infty\mathscr{D}_{u,\tau}\big[\eta^{1/p^n},\frac{1}{u}\big]$ ($\cf$ definition \ref{def phi tau Fp u}) and $\mathscr{D}_{u,\tau}$ is $u$-adically separated and complete.
\end{notation}
\begin{lemma}\label{lem cf-1 proof of thm 3.32}
Assume $n, k\in \NN$ are such that $\tau_D^{k}(\varphi(e_i))\in\mathscr{D}_{u,\tau}[\eta^{1/p^n}].$ Then for any $m\in \NN,$ $y\in \frac{1}{u^m}\mathscr{D}_{u,\tau}[\eta^{1/p^n}]$ implies $\tau_D^{k}(y)\in\frac{1}{u^m}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].$
\end{lemma}
\begin{proof}
Write $y=\sum_{i=1}^d\lambda_i\varphi(e_i)$ with $\lambda_i\in\frac{1}{u^m}k(\!(\eta)\!)[\![u]\!].$ Then $\tau_D^{k}(y)=\sum_{i=1}^d\tau^{k}(\lambda_i)\tau_D^{k}(\varphi(e_i)).$ By assumption we know that $\tau_D^{k}(\varphi(e_i))\in\mathscr{D}_{u,\tau}[\eta^{1/p^n}]:$ it remains to control $\tau^{k}(\lambda_i).$
Recall that $\tau(u)=\varepsilon u=u+\eta u.$ Write $\lambda_i=\frac{1}{u^m}\mu_i$ with $\mu_i\in k(\!(\eta)\!)[\![u]\!]:$ we have $\tau^{k}(\lambda_i)=\frac{1}{u^m\varepsilon^{km}}\tau^{k}(\mu_i)\in \frac{1}{u^m}k(\!(\eta)\!)[\![u]\!]$ as $\varepsilon=1+\eta\in k[\![\eta]\!]^{\times}.$
\end{proof}
\begin{proof}[Proof of theorem \ref{prop quais-iso psi} for $\FF_p$-representations:]
Let $y\in D_{u,\tau,0}^{\psi=0}:$ there exist $n,m\in\NN$ such that $y\in\frac{1}{u^m}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].$ By continuity, there exists $r\in\NN$ such that $\tau_D^{p^r}(e_i)\equiv e_i\mod u^m\mathscr{D}_{u,\tau}[\eta^{1/p^n}]$ (making $n$ larger if necessary), whence $\tau_D^{p^r}(\varphi(e_i))\equiv\varphi(e_i)\mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}]$ (recall that $\tau_D$ and $\varphi$ commute). Put \ft{\cf corollary \ref{coro precise computation of map between different tau power -1} for more details.}
$$z=\tfrac{\tau_D^{p^r}-1}{\tau_D-1}(y)=(1+\tau_D+\cdots+\tau_D^{p^r-1})(y)\in D_{u,\tau^{p^r},0}^{\psi=0}$$
There exists $l\in \NN$ such that $z\in\frac{1}{u^l}\mathscr{D}_{u,\tau}[\eta^{1/p^n}]^{\psi=0}$ (making $n$ larger if necessary) by lemma \ref{lem cf-1 proof of thm 3.32} and the fact that $\psi$ commutes with $\tau_D.$ By lemma \ref{lemm5.18}, we can write $z=\sum\limits_{i=1}^d\sum\limits_{\substack{j\geq-l\\ p\nmid j}}f_{i,j}(\eta)u^j\varphi(e_i)$ with $f_{i,j}(\eta)\in k(\!(\eta^{1/p^n})\!).$ An argument similar to the proof of lemma \ref{lemm inverse of tau-1} shows that for all $i,j,$ there exists $c_{i,j}\in k$ such that $f_{i,j}(\eta)\equiv c_{i,j}(\varepsilon^{jp^r}-1)\ \mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].$
Indeed $z\in D_{u,\tau^{p^r},0}^{\psi=0}$ implies that
$$(\gamma\otimes 1)z=(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots +\tau_D^{( \chi{(\gamma)}-1) p^r } )(z).$$
The left hand side is
$$(\gamma\otimes 1)z=(\gamma\otimes 1)\big(\sum\limits_{i=1}^d\sum\limits_{\substack{j\geq-l\\ p\nmid j}}f_{i,j}(\eta)u^j\varphi(e_i)\big)=\sum\limits_{i=1}^d\sum\limits_{\substack{j\geq-l\\ p\nmid j}}f_{i,j}(\eta^{\chi(\gamma)})u^j\varphi(e_i)$$
as $\varphi(e_i)$ are all fixed by $\mathscr{G}_{K_{\pi}},$ hence in particular fixed by $\gamma.$ For the right hand side, we have
\begin{align*}
&(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots +\tau_D^{( \chi{(\gamma)}-1) p^r } )(z)\\
\equiv& \sum\limits_{i=1}^d\sum\limits_{\substack{j\geq-l\\ p\nmid j}}f_{i,j}(\eta)u^j(1+\varepsilon^{jp^r}+\varepsilon^{2jp^r}+\cdots + \varepsilon^{(\chi(\gamma)-1)jp^r})\varphi(e_i)\ \mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}]\\
=& \sum\limits_{i=1}^d\sum\limits_{\substack{j\geq-l\\ p\nmid j}}f_{i,j}(\eta)u^j\Big( \frac{\varepsilon^{\chi(\gamma)jp^r}-1}{\varepsilon^{jp^r}-1} \Big) \varphi(e_i)
\end{align*}
as $\tau_D^{p^r}(\varphi(e_i))\equiv\varphi(e_i)\mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].$ Hence for all $i\in\{1, \ldots, d\}$ and $j<pm,$ there exists $c_{i,j}\in k$ such that $f_{i,j}(\eta)=c_{i,j}(\varepsilon^{jp^r}-1)$ ($\cf$ lemma \ref{lemm inverse of tau-1}). Hence
\[z\equiv\sum\limits_{i=1}^d\sum\limits_{\substack{p\nmid j \\ -l\leq j\leq pm}}c_{i,j}(\varepsilon^{jp^r}-1)u^j\varphi(e_i)\ \mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].\]
Put
$$x_0=\sum_{i=1}^d\Big(\sum_{\substack{p\nmid j\\-l\leq j\leq pm}} c_{i,j}u^j\Big)\varphi(e_i)\in\tfrac{1}{u^m}\mathscr{D}.$$
For all $i\in\{1,\ldots,d\},$ we can write $\tau_D^{p^r}(\varphi(e_i))=\varphi(e_i)+u^{pm}g_{i}$ with $g_{i}\in\mathscr{D}_{u,\tau}[\eta^{1/p^n}]:$ we have
\[ (\tau_D^{p^r}-1)(x_0)=\sum\limits_{i=1}^d\sum\limits_{\substack{p\nmid j\\-l\leq j\leq pm}} c_{i,j}\Big(\varepsilon^{jp^r}u^j(\varphi(e_i)+u^{pm}g_{i})-u^j\varphi(e_i)\Big)\equiv z-z_1 \ \mod u^{pm}\mathscr{D}_{u,\tau}[\eta^{1/p^n}], \]
where \[ z_1=\sum\limits_{i=1}^d\sum\limits_{\substack{j\geq pm\\ p\nmid j}}f_{i,j}(\eta)u^j\varphi(e_i) -\sum\limits_{i=1}^d\sum\limits_{\substack{p\nmid j\\-l\leq j\leq pm}} c_{i,j}\varepsilon^{jp^r}u^{j+pm}g_{i}\in u^{pm-l}\mathscr{D}_{u,\tau}[\eta^{1/p^n}].\]
By construction, we have $\psi(x_0)=0,$ which implies $\psi\big((\tau_D^{p^r}-1)(x_0)\big)=0:$ as $\psi(z)=0,$ we have $\psi(z_1)=0$ as well. Note also that $\tau_D^{p^r}-1$ maps $D$ into $D_{u,\tau^{p^r},0}:$ as $z\in D_{u,\tau^{p^r},0},$ we also have $z_1\in D_{u,\tau^{p^r},0}.$ This shows that we can carry on the preceding construction, and build sequences $(z_s)_{s\in\NN}$ in $D_{u,\tau^{p^r},0}$ and $(x_s)_{s\in\NN}$ in $D$ such that $z_0=z,$ $z_s\in u^{spm-l}\mathscr{D}_{u,\tau}[\eta^{1/p^n}],$ $x_s\in u^{spm-l}\mathscr{D}$ and $(\tau_D^{p^r}-1)(x_s)=z_s-z_{s+1}$ for all $s\in\NN.$ The series $x=\sum\limits_{s=0}^\infty x_s$ converges in $\frac{1}{u^{l}}\mathscr{D},$ and summing all the equalities gives $(\tau_D^{p^r}-1)(x)=z.$
This shows that $\frac{\tau_D^{p^r}-1}{\tau_D-1}\big((\tau_D-1)(x)\big)=(\tau_D^{p^r}-1)(x)=\frac{\tau_D^{p^r}-1}{\tau_D-1}(y):$ as $\frac{\tau_D^{p^r}-1}{\tau_D-1}\colon D_{u,\tau,0}^{\psi=0}\to D_{u,\tau^{p^r},0}^{\psi=0}$ is injective by proposition \ref{prop tau-1 high power inj}, we get $y=(\tau_D-1)(x),$ so $y$ belongs to the image of $\tau_D-1.$ As this holds for all $y\in D_{u,\tau,0}^{\psi=0},$ this proves the surjectivity of $\tau_D-1\colon D^{\psi=0}\to D_{u,\tau,0}^{\psi=0}.$ Together with corollary \ref{cor injective before "the trivial case"}, this finishes the proof of theorem \ref{prop quais-iso psi} for $\FF_p$-representations.
\end{proof}
\begin{corollary}\label{coro Fp bijection thm} For all $r\in\NN_{\geq0},$ the maps $\tau_D^{p^r}-1\colon D^{\psi=0}\to D_{u,\tau^{p^r},0}^{\psi=0}$ and $\frac{\tau_D^{p^r}-1}{\tau_D-1}\colon D_{u,\tau,0}^{\psi=0}\to D_{u,\tau^{p^r},0}^{\psi=0}$ are bijective.
\end{corollary}
\begin{proof}
Replacing $T$ by its restriction to $\mathscr{G}_{K_r}$ and its $(\varphi,\tau)$-module by the corresponding $(\varphi,\tau^{p^r})$-module, $\ie$ replacing $\tau_D$ by $\tau_D^{p^r},$ the $\FF_p$-case of theorem \ref{prop quais-iso psi} implies that the map $\tau_D^{p^r}-1\colon D^{\psi=0}\to D_{u,\tau^{p^r},0}^{\psi=0}$ is bijective. The statement about $\frac{\tau_D^{p^r}-1}{\tau_D-1}$ follows.
\end{proof}
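The deduction of the second statement can be made explicit through the factorization
\[ \tau_D^{p^r}-1=\frac{\tau_D^{p^r}-1}{\tau_D-1}\circ(\tau_D-1)\colon\ D^{\psi=0}\xrightarrow{\ \tau_D-1\ }D_{u,\tau,0}^{\psi=0}\xrightarrow{\ \frac{\tau_D^{p^r}-1}{\tau_D-1}\ }D_{u,\tau^{p^r},0}^{\psi=0}: \]
as the composite and the first map are bijective, so is $\frac{\tau_D^{p^r}-1}{\tau_D-1}.$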
\bigskip
\subsection{Proof of theorem \ref{prop quais-iso psi}: the quasi-isomorphism}
\begin{lemma}\label{lemm tue par psi is exact}
Let $0\to T^{\prime}\to T\to T^{\prime\prime}\to 0$ be a short exact sequence in $\Rep_{\ZZ_p}(\mathscr{G}_K).$ Then we have a short exact sequence
\[ 0\to \mathcal{D}(T^{\prime})^{\psi=0}\to \mathcal{D}(T)^{\psi=0}\to \mathcal{D}(T^{\prime\prime})^{\psi=0}\to 0. \]
\end{lemma}
\begin{proof}
We know by lemma \ref{Exactness D-tau-0 Zp} that $\mathcal{D}(-)$ is an exact functor and hence we have the short exact sequence $0\to \mathcal{D}(T^{\prime})\to \mathcal{D}(T)\to \mathcal{D}(T^{\prime\prime})\to 0.$ Now we consider the following diagram of complexes:
\[\xymatrix{ & \mathcal{D}(T^{\prime})^{\psi=0} \ar[r] \ar[d]& \mathcal{D}(T)^{\psi=0} \ar[r] \ar[d]& \mathcal{D}(T^{\prime\prime})^{\psi=0} \ar[d]& \\
0 \ar[r] & \mathcal{D}(T^{\prime}) \ar[r] \ar[d]^{\psi}& \mathcal{D}(T) \ar[r] \ar[d]^{\psi}& \mathcal{D}(T^{\prime\prime}) \ar[r] \ar[d]^{\psi}& 0\\
0 \ar[r] & \mathcal{D}(T^{\prime}) \ar[r] & \mathcal{D}(T) \ar[r] & \mathcal{D}(T^{\prime\prime}) \ar[r] & 0.
}\]
Since $\psi$ is surjective, the snake lemma gives the following short exact sequence: \[ 0\to \mathcal{D}(T^{\prime})^{\psi=0} \to \mathcal{D}(T)^{\psi=0} \to \mathcal{D}(T^{\prime\prime})^{\psi=0}\to 0. \]
\end{proof}
\begin{proposition}\label{prop Zp-case kernel complex trivial}
Let $T\in \Rep_{\ZZ_p}(\mathscr{G}_{K}).$ Then we have a bijection $\mathcal{D}(T)^{\psi=0}\xrightarrow{\tau_D-1} \mathcal{D}(T)_{u,\tau, 0}^{\psi=0}.$
\end{proposition}
\begin{proof}
Notice that $\mathcal{D}(T/(p^n))=\mathcal{D}(T)/(p^n)$ and that $T$ and $\mathcal{D}(T)$ are both $p$-adically complete. Hence it suffices to prove the cases when $T$ is killed by $p^n$ with $n\in \NN.$ We use induction on $n,$ the case $n=1$ being corollary \ref{coro Fp bijection thm} with $r=0.$ Suppose $T$ is killed by $p^n.$ Put $T^{\prime}=p^{n-1}T,$ $T^{\prime\prime}=T/T^{\prime}$ and consider the exact sequence in $\Rep_{\ZZ_p}(\mathscr{G}_K)$ \[0\to T^{\prime}\to T\to T^{\prime\prime}\to 0. \]
We then have the following commutative diagram
\[ \xymatrix{ 0 \ar[r] & \mathcal{D}(T^{\prime})^{\psi=0} \ar[r] \ar[d]_{\tau_D-1}& \mathcal{D}(T)^{\psi=0} \ar[r] \ar[d]_{\tau_D-1}& \mathcal{D}(T^{\prime\prime})^{\psi=0} \ar[d]_{\tau_D-1} \ar[r]& 0 \\
0 \ar[r] & \mathcal{D}(T^{\prime})_{u, \tau, 0}^{\psi=0} \ar[r]& \mathcal{D}(T)_{u, \tau, 0}^{\psi=0} \ar[r]& \mathcal{D}(T^{\prime\prime})_{u, \tau, 0}^{\psi=0} &
} \]
The first row is exact by lemma \ref{lemm tue par psi is exact}, and the second row is exact (except possibly at the right end) because $\mathcal{D}(-)_{u, \tau, 0}$ is left exact and the functor $(-)^{\psi=0}$ is left exact as well. As the first and third vertical maps are isomorphisms by the induction hypothesis, so is the middle one. We then finish the proof by passing to the limit.
\end{proof}
\begin{remark}
Theorem \ref{prop quais-iso psi} follows from proposition \ref{prop Zp-case kernel complex trivial}.
\end{remark}
\section{The \texorpdfstring{$(\varphi, \tau^{p^r})$}{(varphi, tau p powers)}-modules}\label{section 3.4 phi tau}
\begin{notation}\label{not Kr}
We put $K_r=K(\pi_{r})$\index{$K_r$} for $r\in \NN$ and we have the following diagram:
\[ \begin{tikzcd}[every arrow/.append style={dash}]
&&\overline{K} \arrow[ldd, bend right, "\mathscr{G}_{K_{\pi}}"] \arrow[llddd, bend right, "\mathscr{G}_{K_{r}}"'] \arrow[d, "\mathscr{G}_L"]\arrow[rrddd, bend left, "\mathscr{G}_{K_{\zeta}}"]&&\\
&&L\arrow[ld, "\overline{\la \gamma \ra}"]\arrow[ddd]\arrow[rd,"\overline{\la \tau^{p^r} \ra}"']&&\\
&K_{\pi}\arrow[ld]& & K_{\zeta}K_r\arrow[rd]&\\
K_r \arrow[rrd]&&&&K_{\zeta}\arrow[lld, "\Gamma=\overline{\la\gamma \ra}"]\\
&& K &
\end{tikzcd}
\]
\end{notation}
\begin{definition}\label{def (phi, tau_p^r)-modules normal version}
Let $r\in \NN.$ A \emph{$(\varphi, \tau^{p^r})$-module over $(F_0, F_{\tau})$} is the data of:
\begin{itemize}
\item[(1)] an \'etale $\varphi$-module $D$ over $F_0$;
\item[(2)] a $\tau^{p^r}$-semi-linear map $\tau_D^{p^r}$ on $D_{\tau}:=F_{\tau}\otimes_{F_0}D$ which commutes with $\varphi_{F_{\tau}}\otimes \varphi_D$ (where $\varphi_{F_{\tau}}$ is the Frobenius map on $F_{\tau}$ and $\varphi_D$ the Frobenius map on $D$) and such that
\[ (\forall x\in D_{\tau}) \, (g\otimes 1)\circ \tau_D^{p^r}(x)=(\tau_D^{p^r})^{\chi(g)}(x), \]
for all $g\in \mathscr{G}_{K_{\pi}}/\mathscr{G}_L$ such that $\chi(g)\in\NN.$
\end{itemize}
We denote $\Mod_{F_0, F_{\tau}}(\varphi,\tau^{p^r})$\index{$\Mod_{F_0, F_{\tau}}(\varphi,\tau^{p^r})$} the corresponding category.
\end{definition}
By \cite[Remark 1.15]{Car13}, we have an equivalence of categories between $\Rep_{\FF_p}(\mathscr{G}_{K_r})$ and the category of $(\varphi, \tau^{p^r})$-modules over $(F_0, F_{\tau}).$
\begin{notation}\label{not D u tau normla version}
Let $(D, D_{\tau})\in \Mod_{F_0, F_{\tau}}(\varphi,\tau^{p^r}).$ We put
\[D_{\tau^{p^r}, 0}:=\big\{ x\in D_{\tau};\ (\forall g\in \mathscr{G}_{K_{\pi}})\ \chi(g)\in \ZZ_{>0} \Rightarrow (g\otimes 1)(x)=x+\tau_D^{p^r}(x)+\tau_D^{2p^r}(x)+\cdots +\tau_D^{p^r(\chi(g)-1)}(x) \big\}. \]\index{$D_{\tau^{p^r}, 0}$}
By arguments similar to those of lemma \ref{Replace g witi gamma}, we have
\[D_{\tau^{p^r}, 0}=\big\{ x\in D_{\tau} ;\ (\gamma\otimes 1)x=(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots + \tau_D^{p^{r}(\chi(\gamma)-1)})(x) \big\}.\]
\end{notation}
\begin{definition}\label{def complex phi tau n normal version}
Let $(D, D_{\tau})\in \Mod_{F_0, F_{\tau}}(\varphi,\tau^{p^r}).$ We define a complex $\mathcal{C}_{\varphi, \tau^{p^r}}(D)$\index{$\mathcal{C}_{\varphi, \tau^{p^r}}(D)$} as follows:
\[ \xymatrix{
0\ar[rr] && D \ar[r] & D\oplus D_{ \tau^{p^r}, 0} \ar[r] & D_{\tau^{p^r}, 0}\ar[r] & 0\\
&&x\ar@{|->}[r] &((\varphi-1)(x), (\tau^{p^r}-1)(x))&&\\
&&& (y, z) \ar@{|->}[r] &(\tau^{p^r}-1)(y)-(\varphi-1)(z).&}
\]
If $T\in \Rep_{\FF_p}(\mathscr{G}_K),$ we have in particular the complex $\mathcal{C}_{\varphi, \tau^{p^r}}(\mathcal{D}(T)),$ which will also be simply denoted $\mathcal{C}_{\varphi, \tau^{p^r}}(T).$\index{$\mathcal{C}_{\varphi, \tau^{p^r}}(T)$}
\end{definition}
\begin{proposition}\label{prop tau-puissance complex normal version}
The complex $\mathcal{C}_{\varphi, \tau^{p^r}}(T)$ computes the continuous Galois cohomology $\H^i(\mathscr{G}_{K_r}, T)$ for $i\in \NN.$
\end{proposition}
\begin{proof}
This follows from theorem \ref{thm main result} by replacing $K$ by $K_r$ ($\cf$ \cite[Remarque 1.15]{Car13}).
\end{proof}
\begin{corollary}\label{coro precise computation of map between different tau power -1}
For any $r\in \NN,$ we have the following morphism of complexes:
\[
\xymatrix{
\Cc_{\varphi,\tau^{p^r}}\colon & 0\ar[r] & D \ar[r] \ar@{=}[d] & D\oplus D_{\tau^{p^r},0} \ar[r] \ar[d]^{\id\oplus \frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}} & D_{\tau^{p^r}, 0}\ar[r] \ar[d]^{\frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}} & 0 \\
\Cc_{\varphi,\tau^{p^{r+1}}}\colon & 0\ar[r] & D \ar[r] & D\oplus D_{\tau^{p^{r+1}}, 0} \ar[r]& D_{\tau^{p^{r+1}}, 0}\ar[r] & 0.
} \]
\end{corollary}
\begin{proof}
This follows from direct computations. Recall that for any $g\in \mathscr{G}_K$ and $T\in \Rep_{\ZZ_p}(\mathscr{G}_K),$ the action $g\otimes 1$ over $\mathcal{D}(T)_{\tau}$ is induced by that of $g\otimes g$ over $\W(C^{\flat})\otimes_{\ZZ_p}T.$ For any $x\in D_{\tau^{p^r},0}$ we have $(\gamma\otimes 1)x=\big(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots + \tau_D^{p^{r}(\chi(\gamma)-1)}\big)(x).$ Now we verify that $y:=\frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}(x)$ lies in $D_{\tau^{p^{r+1}}, 0}.$ Indeed, by the relation $\gamma \tau=\tau^{\chi(\gamma)}\gamma$ we have
\begin{align*}
\gamma(y)&=\gamma\Big(\frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}\Big)(x)\\
&=\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{\chi(\gamma)p^r}-1}(\gamma(x))\\
&=\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{\chi(\gamma)p^r}-1}\big(\big(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots + \tau_D^{p^{r}(\chi(\gamma)-1)}\big)(x)\big)\\
&=\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{\chi(\gamma)p^r}-1}\cdot\frac{\tau_D^{\chi(\gamma)p^r}-1}{\tau_D^{p^r}-1}(x)\\
&=\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{p^r}-1}(x).
\end{align*}
On the other hand
\begin{align*}
&\big(1+\tau_D^{p^{r+1}}+\tau_D^{2p^{r+1}}+\cdots + \tau_D^{p^{r+1}(\chi(\gamma)-1)}\big)(y)\\
=&\big(1+\tau_D^{p^{r+1}}+\tau_D^{2p^{r+1}}+\cdots + \tau_D^{p^{r+1}(\chi(\gamma)-1)}\big) \Big(\frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}\Big)(x)\\
=&\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{p^{r+1}}-1}\cdot \frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}(x)\\
=&\frac{\tau_D^{\chi(\gamma)p^{r+1}}-1}{\tau_D^{p^{r}}-1}(x).
\end{align*}
This shows that
\[\gamma(y)=\big(1+\tau_D^{p^{r+1}}+\tau_D^{2p^{r+1}}+\cdots + \tau_D^{p^{r+1}(\chi(\gamma)-1)}\big)(y), \]
and we conclude that $y\in D_{\tau^{p^{r+1}}, 0}.$
\end{proof}
\begin{lemma}\label{lemm map for different tau powers normal version}
Let $n, m\in \NN$ with $n\leq m.$ There is a natural map $\frac{\tau_D^{p^{m}}-1}{\tau_D^{p^{n}}-1} \colon D_{\tau^{p^n},0}\to D_{\tau^{p^{m}}, 0}.$
\end{lemma}
\begin{proof}
By direct computations as in the proof of corollary \ref{coro precise computation of map between different tau power -1}.
\end{proof}
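Concretely, the quotient notation stands for the geometric sum: for $n\leq m,$
\[ \frac{\tau_D^{p^m}-1}{\tau_D^{p^n}-1}=\sum\limits_{i=0}^{p^{m-n}-1}\tau_D^{ip^n}=1+\tau_D^{p^n}+\tau_D^{2p^n}+\cdots+\tau_D^{(p^{m-n}-1)p^n}, \]
which is a well defined operator even though $\tau_D^{p^n}-1$ itself need not be invertible; the verification that it maps $D_{\tau^{p^n},0}$ into $D_{\tau^{p^m},0}$ is the computation of corollary \ref{coro precise computation of map between different tau power -1}.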
\subsubsection{The \texorpdfstring{$(\varphi, \tau^{p^r})$}{(phi, tau p powers)}-modules over partially unperfected coefficients}
Similarly, we have results for $(\varphi, \tau^{p^r})$-modules over partially unperfected coefficients.
\begin{definition}\label{def (phi, tau_p^r)-modules}
Let $r\in \NN.$ A \emph{$(\varphi, \tau^{p^r})$-module over $(F_0, F_{u, \tau})$} is the data of:
\begin{itemize}
\item[(1)] an \'etale $\varphi$-module $D$ over $F_0$;
\item[(2)] a $\tau^{p^r}$-semilinear map $\tau_D^{p^r}$ on $D_{u, \tau}:=F_{u, \tau}\otimes_{F_0}D$ which commutes with $\varphi_{F_{u, \tau}}\otimes \varphi_D$ (where $\varphi_{F_{u, \tau}}$ is the Frobenius map on $F_{u, \tau}$ and $\varphi_D$ the Frobenius map on $D$) and such that
\[ (\forall x\in D_{u, \tau}) \quad (g\otimes 1)\circ \tau_D^{p^r}(x)=(\tau_D^{p^r})^{\chi(g)}(x), \]
for all $g\in \mathscr{G}_{K_{\pi}}/\mathscr{G}_L$ such that $\chi(g)\in\NN.$
\end{itemize}
We denote $\Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r})$\index{$\Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r})$} the corresponding category.
\end{definition}
\begin{remark}
We have an equivalence of categories between $\Rep_{\FF_p}(\mathscr{G}_{K_r})$ and $\Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r})$ ($\cf$ \cite[Remark 1.15]{Car13}).
\end{remark}
\begin{notation}\label{not D u tau }
Let $(D, D_{u, \tau})\in \Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r}).$ We put
\[D_{u, \tau^{p^r}, 0}:=\big\{ x\in D_{u, \tau};\ (\forall g\in \mathscr{G}_{K_{\pi}})\ \chi(g)\in \ZZ_{>0} \Rightarrow (g\otimes 1)(x)=x+\tau_D^{p^r}(x)+\tau_D^{2p^r}(x)+\cdots +\tau_D^{p^r(\chi(g)-1)}(x) \big\}. \]\index{$D_{u, \tau^{p^r}, 0}$}
By arguments similar to those of lemma \ref{Replace g witi gamma}, we see that
\[D_{u, \tau^{p^r}, 0}=\big\{ x\in D_{u,\tau} ;\ (\gamma\otimes 1)x=(1+\tau_D^{p^r}+\tau_D^{2p^r}+\cdots + \tau_D^{p^{r}(\chi(\gamma)-1)})(x) \big\}.\]
\end{notation}
\begin{definition}\label{def complex phi tau n}
Let $(D, D_{u, \tau})\in\Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r}).$ We define a complex $\mathcal{C}^{u}_{\varphi, \tau^{p^r}}(D)$\index{$\mathcal{C}^{u}_{\varphi, \tau^{p^r}}(D)$} as follows:
\[ \xymatrix{
0\ar[rr] && D \ar[r] & D\oplus D_{u, \tau^{p^r}, 0} \ar[r] & D_{u, \tau^{p^r}, 0}\ar[r] & 0\\
&&x\ar@{|->}[r] &((\varphi-1)(x), (\tau^{p^r}-1)(x))&&\\
&&& (y, z) \ar@{|->}[r] &(\tau^{p^r}-1)(y)-(\varphi-1)(z).&}
\]
If $T\in \Rep_{\FF_p}(\mathscr{G}_K),$ we have in particular the complex $\mathcal{C}^u_{\varphi, \tau^{p^r}}(\mathcal{D}(T)),$ which will also be simply denoted $\mathcal{C}^u_{\varphi, \tau^{p^r}}(T).$\index{$\mathcal{C}^u_{\varphi, \tau^{p^r}}(T)$}
\end{definition}
\begin{proposition}\label{prop tau-puissance complex}
The complex $\mathcal{C}^{u}_{\varphi, \tau^{p^r}}(T)$ computes the continuous Galois cohomology $\H^i(\mathscr{G}_{K_r}, T)$ for $i\in \NN.$
\end{proposition}
\begin{proof}
This follows from theorem \ref{coro complex over non-perfect ring works also Zp-case} replacing $K$ by $K_r.$
\end{proof}
\begin{lemma}\label{lemm map for different tau powers}
Let $n, m\in \NN$ with $n<m.$ There is a natural map $D_{u, \tau^{p^n},0}\xrightarrow{\frac{\tau_D^{p^{m}}-1}{\tau_D^{p^{n}}-1}} D_{u, \tau^{p^{m}}, 0}.$
\end{lemma}
\begin{proof}
By direct computations similar to those in corollary \ref{coro precise computation of map between different tau power -1}.
\end{proof}
\begin{corollary}
There is a morphism between complexes:
\[
\xymatrix{
\Cc^{u}_{\varphi,\tau^{p^r}}\colon & 0\ar[r] & D \ar[r] \ar@{=}[d] & D\oplus D_{u, \tau^{p^r},0} \ar[r] \ar[d]^{\id\oplus \frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}} & D_{u, \tau^{p^r}, 0}\ar[r] \ar[d]^{\frac{\tau_D^{p^{r+1}}-1}{\tau_D^{p^{r}}-1}} & 0 \\
\Cc^{u}_{\varphi,\tau^{p^{r+1}}}\colon & 0\ar[r] & D \ar[r] & D\oplus D_{u, \tau^{p^{r+1}}, 0} \ar[r]& D_{u, \tau^{p^{r+1}}, 0}\ar[r] & 0.
} \]
\end{corollary}
\begin{proof}
This follows from direct computations similar to those in corollary \ref{coro precise computation of map between different tau power -1}.
\end{proof}
\begin{remark}
In section \ref{section 3.4 phi tau}, $(\varphi, \tau^{p^r})$-modules with $r\in\NN$ are only defined for $\Rep_{\FF_p}(\mathscr{G}_K).$ One can easily define the categories $\Mod_{\Oo_{\Ee}, \Oo_{\Ee_{\tau}}}(\varphi,\tau^{p^r}),$\index{$\Mod_{\Oo_{\Ee}, \Oo_{\Ee_{\tau}}}(\varphi,\tau^{p^r})$} $\Mod_{\Oo_{\Ee}, \Oo_{\Ee_{u,\tau}}}(\varphi,\tau^{p^r}),$\index{$\Mod_{\Oo_{\Ee}, \Oo_{\Ee_{u,\tau}}}(\varphi,\tau^{p^r})$} $\Mod_{\Ee, \Ee_{\tau}}(\varphi,\tau^{p^r}),$\index{$\Mod_{\Ee, \Ee_{\tau}}(\varphi,\tau^{p^r})$} $\Mod_{\Ee, \Ee_{u,\tau}}(\varphi,\tau^{p^r}),$\index{$\Mod_{\Ee, \Ee_{u,\tau}}(\varphi,\tau^{p^r})$} and generalize the related notions to $\Rep_{\ZZ_p}(\mathscr{G}_K)$ and $\Rep_{\QQ_p}(\mathscr{G}_K).$ The results given for $\Mod_{F_0, F_{\tau}}(\varphi,\tau^{p^r})$ and $\Mod_{F_0, F_{u, \tau}}(\varphi,\tau^{p^r})$ hold similarly for the other categories.
\end{remark} | {"config": "arxiv", "file": "2201.11235/Chapter3.tex"} |
\begin{document}
\title{Stabilization of Second Order Nonlinear Equations with Variable Delay}
\author{Leonid Berezansky$^{\rm a}$,
Elena Braverman$^{\rm b}$$^{\ast}$\thanks{$^\ast$Corresponding author. Email: [email protected]}
and Lev Idels$^{\rm c}$ \\ \vspace{6pt}
$^{a}${\em{Dept. of Math, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel}};\\
$^{b}${\em{Dept. of Math \& Stats, Univ. of Calgary, 2500 University Dr. NW, Calgary, AB, Canada T2N1N4}};\\
$^{c}${\em{Dept. of Math, Vancouver Island University (VIU), 900 Fifth St. Nanaimo, BC, Canada V9S5J5}} \\
\received{received April 2014} }
\maketitle
\begin{abstract}
For a wide class of second order nonlinear non-autonomous models, we illustrate
that combining proportional state control with feedback proportional to the derivative of the chaotic signal
makes it possible to stabilize unstable motions of the system.
The delays are variable, which leads to more flexible controls permitting
delay perturbations; only delay bounds are significant for stabilization by a delayed control.
The results are applied to the sunflower equation which has
an infinite number of equilibrium points.
\end{abstract}
\begin{keywords} non-autonomous second order delay differential equations;
stabilization; proportional feedback; a controller with damping;
derivative control; sunflower equation
\end{keywords}
\section{Introduction}
Control of dynamical systems is a classical subject in the engineering sciences.
Time delayed feedback control is an efficient method for stabilizing unstable periodic orbits of
chaotic systems which are described by second order delay differential equations, see
\cite{Bo,Do,Fr1,Fr2,Kim,Liu,Reit, Gabor2,Gabor1,Wan}.
When introducing a control, we assume that the chosen equilibrium of an equation is unstable,
and the controller will transform the unstable equation into an asymptotically or exponentially stable equation.
Instability tests for some autonomous delay models of the second order could be found, for example,
in \cite{Cah}.
Two basic proportional (adaptive) control models are widely used:
standard feedback controllers $u(t)=K[x(t)-x^{\ast}]$ with
the controlling force proportional to the deviation of the system from the attractor,
where $x^{\ast}$ is an equilibrium of the equation,
and the delayed feedback control $u(t)=K[x(t-\tau(t))-x(t)]$, see
\cite{Boccal,Joh,Ko}.
Proportional control fails if there exist rapid changes to the system that come from an external source,
and to keep the system steady under an abrupt change, a derivative control was used in \cite{Bela, Reit, Vyh}, i.e.
$u(t)=\beta \frac{d}{dt}e(t)$,
where, for example, $e(t)=x(t-\tau)-x(t)$ or $e(t)=x(t)-x^{\ast}$.
In electronics, a simple operational amplifier differentiator
circuit will generate the continuous feedback signal which is proportional to the time derivative
of the voltage across the negative resistance, see \cite{Joh}.
A classical proportional control does not stabilize
even linear ordinary differential equations; e.g. the equation $\ddot{x}=u(t)$ with the control
$u(t)=K[x(t-\tau(t))-x(t)]$ is not asymptotically stable for any $K$, since any constant is a solution of
this equation. The pure derivative control $u(t)=-\lambda\dot{x}(t)$ also does not stabilize all second order differential equations.
For example, the equation $\ddot{x}+ax(t)-ax(t-\tau)=u(t)$ with the control
$u(t)=-\lambda\dot{x}(t)$ is not asymptotically stable for any control since any constant is a solution of this equation.
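Indeed, substituting a constant function $x(t)\equiv c$ into $\ddot{x}+ax(t)-ax(t-\tau)=-\lambda\dot{x}(t)$ gives
\[ \underbrace{\ddot{x}(t)}_{=0}+ac-ac=0=-\lambda\underbrace{\dot{x}(t)}_{=0}, \]
so every constant solves the controlled equation, and no equilibrium can be asymptotically stable.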
Some interesting and novel results could be found in \cite{Ren,Ru,Si,Saberi2,Yan}.
For a linear non-autonomous model
$\dot{x}=A(t)x(t)+B(t) u(t)$ the effective multiple-derivative feedback controller
$\displaystyle u(t)=\sum_{i=0}^{M-1} K_{i}x^{(i)}(t-h)$ was introduced in \cite{Saberi1},
and a special transformation was used to transform neutral-type DDE into a retarded DDE.
However, most of second order applied models are nonlinear, even the original pendulum equation.
The main focus of the paper is the control of nonlinear delay equations, some real world models are considered
in Examples~\ref{ex2}, \ref{ex3} and \ref{additional_example}.
In the present paper we study a nonlinear second order delay differential equation
\begin{equation}\label{3}
\ddot{x}(t)+\sum_{k=1}^l f_k(t,\dot{x}(g_k(t)))+\sum_{k=1}^m s_k(t,x(h_k(t)))=u(t),~~t \geq t_0,
\end{equation}
with the input or the controller $u(t)$, along with its linear version
\begin{equation}\label{1}
\ddot{x}(t)+\sum_{k=1}^l a_k(t)\dot{x}(g_k(t))
+\sum_{k=1}^m b_k(t)x(h_k(t))=u(t), ~~t \geq t_0.
\end{equation}
Both equations (\ref{1}) and (\ref{3}) satisfy for any $t_0\geq 0$ the initial condition
\begin{equation}\label{2}
x(t)=\varphi(t), ~\dot{x}(t)=\psi(t), ~t\leq t_0.
\end{equation}
We will assume that the initial value problem has a unique global solution on $[t_0,\infty)$
for all nonlinear equations considered in this paper, and
the following conditions are satisfied:
\\
(a1) $a_i, b_j$ are Lebesgue measurable and essentially bounded functions on $[0,\infty)$,
$i=1,\dots,l$, $j=1,\dots,m$, which allows us to define the essential eventual limits
\begin{equation}\label{6}
\alpha=\limsup_{t\rightarrow\infty}\sum_{k=1}^l |a_k(t)|,~\beta=\limsup_{t\rightarrow\infty}\sum_{k=1}^m |b_k(t)|;
\end{equation}
(a2) $g_i, h_j$ are Lebesgue measurable functions,
$g_i(t)\leq t$, $h_j(t)\leq t$, $\lim\limits_{t\to\infty} g_i(t)=\infty$,
$\lim\limits_{t\to\infty} h_j(t)=\infty$, $i=1,\dots,l$, $j=1,\dots,m$.
The paper is organized as follows.
In Section 2 we design a stabilizing damping control
$u(t)=\lambda_1 \dot{x}(t)+\lambda_2 (x(t)-x^{*})$ {\em for any linear non-autonomous equation} (\ref{1}).
Under some additional condition on the functions $f_k$ and $s_k$, such control also stabilizes equations of type (\ref{3}). The
results are based on stability tests recently obtained in \cite{BBD, BBI}
for second order non-autonomous differential equations.
We also prove in Section 2 that a strong enough controlling force, depending on the derivative and the present
(and past) positions, can globally stabilize an equilibrium of the controlled equation.
In Section 3 the classical proportional delayed feedback controller $u(t)=K[x(t-\tau(t))-x(t)]$ is applied
to stabilize a certain class of second order delay equations with a single delay involved in the state term only.
We develop tailored feedback controllers and justify their application both analytically and numerically.
\section{Damping Control}
We will use auxiliary results recently obtained in \cite{BBD,BBI}.
\begin{Lem}\label{lemma1}\cite[Corollary 3.2]{BBD}
If $a>0, b>0,$
\begin{equation}\label{5}
4b>a^2, ~~\frac{2(a+\sqrt{4b-a^2})}{a\sqrt{4b-a^2}}\alpha+
\frac{4}{a\sqrt{4b-a^2}}\beta<1,
\end{equation}
where $\alpha$ and $\beta$ are defined in (a1) by (\ref{6}),
then the zero solution of the equation
\begin{equation}\label{4}
\ddot{x}(t)+a\dot{x}(t)+bx(t)+\sum_{k=1}^l a_k(t)\dot{x}(g_k(t))
+\sum_{k=1}^m b_k(t)x(h_k(t))=0
\end{equation}
is globally exponentially stable.
\end{Lem}
\begin{Lem}\label{lemma2}\cite{BBI}
Assume that the equation
\begin{equation}\label{19}
\ddot{x}(t)+f(t,x(t),\dot{x}(t))+s(t,x(t))+\sum_{k=1}^m s_k(t,x(t),x(h_k(t)))=0
\end{equation}
possesses a unique trivial equilibrium, where
$f(t,v,0)=0$, $s(t,0)=0$, $s_k(t,v,0)=0$, \\ $0<a_0\leq \frac{f(t,v,u)}{u}\leq A$,
$\displaystyle 0<b_0\leq \frac{s(t,u)}{u}\leq B$,
$\displaystyle \left| \frac{s_k(t,v,u)}{u} \right| \leq C_k, u\neq 0,~t-h_k(t)\leq \tau$.
If at least one of the conditions \\
1) $\displaystyle B\leq \frac{a_0^2}{4},~ \sum_{k=1}^m C_k<b_0-\frac{a_0}{2}(A-a_0)$, ~~~~~~
2) $\displaystyle b_0\geq \frac{a_0}{2}\left(A-\frac{a_0}{2}\right),~ \sum_{k=1}^m C_k<\frac{a_0^2}{2}-B$\\
holds, then zero is a global attractor for all solutions of equation~(\ref{19}).
\end{Lem}
We start with linear equations.
Stabilization results for linear systems were recently obtained in \cite{Saberi1,Saberi2}.
Unlike \cite{Saberi1,Saberi2}, the following theorem considers models with variable delays, however,
the control is not delayed.
\begin{Th}\label{th1}\label{cor1}
Let $\alpha$ and $\beta$ be defined by (\ref{6}). Then for any $\delta\in (0,2)$ and
\begin{equation}\label{8}
\lambda > \mu(\delta) := \frac{(\delta+\sqrt{4-\delta^2}) \alpha+\sqrt{(\delta+\sqrt{4-\delta^2})^2\alpha^2
+4\sqrt{4-\delta^2}\beta\delta}}{\delta \sqrt{4-\delta^2}}\, ,
\end{equation}
equation (\ref{1}) with the control $u(t)=-\delta \lambda \dot{x}(t)-\lambda^2 x(t)$
is exponentially stable.
\end{Th}
\begin{proof}
Equation (\ref{1}) with the control
\begin{equation}\label{7a}
\ddot{x}(t)+\sum_{k=1}^l a_k(t)\dot{x}(g_k(t))
+\sum_{k=1}^m b_k(t)x(h_k(t))=-\delta \lambda\dot{x}(t)-\lambda^2 x(t)
\end{equation}
has the form of (\ref{4}) with $a=\delta\lambda$ and $b=\lambda^2$.
Then the inequalities in (\ref{5}) have the form
\begin{equation}\label{9}
4\lambda^2>\delta^2\lambda^2 \mbox{~~and~~}
\frac{2(\delta+\sqrt{4-\delta^2})}{\delta\lambda\sqrt{4-\delta^2}}\alpha+\frac{4}{\delta\lambda^2\sqrt{4-\delta^2}}\beta<1.
\end{equation}
The first inequality in (\ref{9}) holds as $\delta\in (0,2)$,
and the second one is equivalent to
\begin{equation}\label{10}
\delta\lambda^2\sqrt{4-\delta^2} -2 \left( \delta+\sqrt{4-\delta^2} \right) \alpha\lambda-4\beta>0.
\end{equation}
Condition (\ref{8}) implies (\ref{10}), which completes the proof.
\end{proof}
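The bound $\mu(\delta)$ in (\ref{8}) is exactly the positive root of the quadratic (\ref{10}), so any $\lambda$ above it satisfies (\ref{9}). The following short numerical check (an illustrative sketch, not part of the original analysis; the parameter values are chosen arbitrarily) confirms both this and the simplification of $\mu(\sqrt{2})$ used later:

```python
from math import sqrt

def mu(delta, alpha, beta):
    """The gain threshold mu(delta) from condition (8)."""
    r = sqrt(4 - delta**2)
    return ((delta + r) * alpha
            + sqrt((delta + r)**2 * alpha**2 + 4 * r * beta * delta)) / (delta * r)

def lhs_10(lam, delta, alpha, beta):
    """Left-hand side of inequality (10); positive iff (9) holds."""
    r = sqrt(4 - delta**2)
    return delta * lam**2 * r - 2 * (delta + r) * alpha * lam - 4 * beta

alpha, beta = 1.0, 1.0
delta = sqrt(2)
# mu(delta) is the positive root of (10), so anything above it works:
print(lhs_10(mu(delta, alpha, beta) * 1.001, delta, alpha, beta) > 0)  # True
# for delta = sqrt(2) the bound reduces to sqrt(2)*(alpha + sqrt(alpha^2 + beta)):
print(abs(mu(delta, alpha, beta) - sqrt(2) * (1 + sqrt(2))) < 1e-12)   # True
```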
\begin{Cor}\label{cor2a}
Let $\displaystyle \mu(\delta_0)=\min_{\delta\in [\varepsilon,2-\varepsilon]} \mu(\delta)$ for some $\varepsilon>0$,
where $\mu(\delta)$ is defined in (\ref{8}).
Then for $\lambda > \mu(\delta_0)$ equation (\ref{1}) with the control $u(t)=-\delta_0 \lambda\dot{x}(t)-\lambda^2 x(t)$
is exponentially stable.
\end{Cor}
For $\delta=\sqrt{2}$
Theorem~\ref{th1} yields the following result.
\begin{Cor}\label{cor2}
Eq. (\ref{1}) with the control
$u(t)=-\sqrt{2}\lambda \dot{x}(t)-\lambda^2 x(t)$
is exponentially stable if
\begin{equation}\label{12a}
\lambda>\sqrt{2}(\alpha+\sqrt{\alpha^2+\beta}).
\end{equation}
\end{Cor}
\begin{Rem} For any equation (\ref{1}) there exists $\lambda>0$ such that condition (\ref{12a}) holds.
Hence the stabilizing damping control exists for any equation of form (\ref{1}).
\end{Rem}
\begin{Ex}\label{ex1} For the equation
\begin{equation}\label{11}
\ddot{x}(t)+ (\sin t) \dot{x}(g(t))+ (\cos t) x(h(t))=0, ~~h(t)\leq t, ~ g(t)\leq t,
\end{equation}
the upper bounds defined in (\ref{6}) are $\alpha=\beta=1$.
Hence, as long as $\lambda>2+\sqrt{2}$ in Corollary~\ref{cor2}, equation (\ref{11}) with the control
$u(t)=-\sqrt{2}\lambda \dot{x}(t)-\lambda^2 x(t)$
is exponentially stable.
\end{Ex}
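For Example~\ref{ex1} the claim can be verified directly against condition (\ref{5}) of Lemma~\ref{lemma1}: the controlled equation has $a=\sqrt{2}\lambda$, $b=\lambda^2$, $\alpha=\beta=1$, and the condition holds precisely when $\lambda>2+\sqrt{2}$. A minimal sketch (our own check, with illustrative gains on either side of the threshold):

```python
from math import sqrt

def lemma1_ok(a, b, alpha, beta):
    """Sufficient exponential-stability condition (5) of Lemma 1."""
    if 4 * b <= a * a:
        return False
    r = sqrt(4 * b - a * a)
    return 2 * (a + r) / (a * r) * alpha + 4 / (a * r) * beta < 1

# Equation (11) with u = -sqrt(2)*lam*x' - lam^2*x, i.e. a = sqrt(2)*lam, b = lam^2:
alpha = beta = 1.0
print(lemma1_ok(sqrt(2) * 3.5, 3.5**2, alpha, beta))  # True:  3.5 > 2 + sqrt(2)
print(lemma1_ok(sqrt(2) * 3.0, 3.0**2, alpha, beta))  # False: 3.0 < 2 + sqrt(2)
```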
Let us proceed to nonlinear equation (\ref{3}); its stabilization is the main object of the present paper.
For simplicity we consider here nonlinear equations with the zero equilibrium, since the change of the variable
$z=x-x^{\ast}$ transforms an equation with the equilibrium $x^*$ into an equation in $z$ with the zero
equilibrium.
\begin{Th}\label{th2}
Suppose $f_k(t,0)=s_k(t,0)=0$,
\begin{equation}\label{13a}
\displaystyle \left|\frac{f_k(t,u)}{u}\right|\leq a_k(t),~\left|\frac{s_k(t,u)}{u}\right|\leq b_k(t), u\neq 0.
\end{equation}
Then for any $\delta\in (0,2)$, the zero equilibrium of (\ref{3})
with the control $u(t)=-\delta\lambda\dot{x}(t)-\lambda^2 x(t)$
\begin{equation}\label{12}
\ddot{x}(t)+\sum_{k=1}^l f_k(t,\dot{x}(g_k(t)))+\sum_{k=1}^m s_k(t,x(h_k(t)))=-\delta\lambda\dot{x}(t)-\lambda^2 x(t)
\end{equation}
is globally asymptotically stable, provided (\ref{8}) holds with $\alpha$ and $\beta$ defined in (\ref{6}).
\end{Th}
\begin{proof}
Suppose $x$ is a fixed solution of equation~(\ref{12}).
Equation~(\ref{12}) can be rewritten as
$$
\ddot{x}(t)+\sum_{k=1}^l a_k(t)\dot{x}(g_k(t))
+\sum_{k=1}^m b_k(t)x(h_k(t))=-\delta\lambda\dot{x}(t)-\lambda^2 x(t),
$$
where
$\displaystyle
a_k(t)= \left\{\begin{array}{cc}
\frac{f_k(t,\dot{x}(g_k(t)))}{\dot{x}(g_k(t))},& \dot{x}(g_k(t))\neq 0,\\
0,& \dot{x}(g_k(t))=0,\end{array}\right. ~~
b_k(t)=\left\{\begin{array}{cc}
\frac{s_k(t,x(h_k(t)))}{x(h_k(t))},& x(h_k(t))\neq 0,\\
0,& x(h_k(t))=0. \end{array}\right.
$
Hence the function $x$ is a solution of the linear equation
\begin{equation}\label{13}
\ddot{y}(t)+\sum_{k=1}^l a_k(t)\dot{y}(g_k(t))
+\sum_{k=1}^m b_k(t)y(h_k(t))=-\delta\lambda\dot{y}(t)-\lambda^2 y(t),
\end{equation}
which is exponentially stable by Theorem~\ref{th1}.
Thus $\lim\limits_{t\rightarrow\infty}y(t)=0$ for any solution
$y$ of equation (\ref{13}), and since $x$ is a solution of (\ref{13}),
$\lim\limits_{t\rightarrow\infty}x(t)=0.$
\end{proof}
In particular, for $\delta=\sqrt{2}$ condition (\ref{8}) transforms into (\ref{12a}).
\begin{Ex}\label{ex2} Consider the equation
\begin{equation}\label{14}
\ddot{x}(t)+a(t) \dot{x}(g(t))+ b(t)\sin (x(h(t)))=0, ~~h(t)\leq t, ~g(t)\leq t,
\end{equation}
with $|a(t)|\leq \alpha, |b(t)|\leq \beta$.
Equation (\ref{14}) generalizes the sunflower equation introduced by Israelson and Johnson in \cite{isa} as a model
for the geotropic circumnutations of {\em Helianthus annuus}; later it was studied in \cite{alf,liza,somolinos}.
We have $\displaystyle \left|\frac{\sin u}{u}\right|\leq 1, ~u\neq 0$; hence if condition (\ref{12a}) holds
for $\alpha=\limsup\limits_{t\rightarrow\infty} |a(t)|$ and $\beta=\limsup\limits_{t\rightarrow\infty} |b(t)|$,
then the zero equilibrium of equation (\ref{14}) with the control
$u(t)=-\sqrt{2}\lambda \dot{x}(t)-\lambda^2 x(t)$ in the right-hand side
is globally exponentially stable.
Equation (\ref{14}) has an infinite number of equilibrium points
$x^{\ast}=\pi k$, $k=0,\pm 1,\pm 2,\dots$. To stabilize a fixed equilibrium $x^{\ast}=\pi k$
we apply the controller $u(t)=-\sqrt{2}\lambda \dot{x}(t)-\lambda^2 (x(t)-x^{\ast})$.
For example, consider the sunflower equation
$$
\ddot{x}(t)+ \dot{x}(t)+ 2 \sin (x(t-\pi))=0
$$
with the initial conditions $x(0)=6, 3, 0.1$, where $x(t)$ is constant for $t\leq 0$ and
$x'(0)=1$; this equation has chaotic solutions (see Fig.~\ref{figure2}, left).
Application of the controller $u(t)=-\lambda \delta\dot{x}(t)- \lambda^2 [x(t)-\pi]$,
where $\delta=\sqrt{2}$ and $\lambda > \sqrt{2}+\sqrt{6}$, for example, $\lambda=4$,
stabilizes the otherwise unstable equilibrium $x^{\ast}=\pi$, as illustrated in Fig.~\ref{figure2}, right.
\begin{figure}[ht]
\centering
\vspace{-15mm}
\includegraphics[scale=0.38]{nopicontrol} \hspace{-6mm}
\includegraphics[scale=0.38]{picontrol}
\vspace{-28mm}
\caption{Stabilization of the equilibrium of the sunflower equation
$\ddot{x}(t)+\dot{x}(t)+2 \sin(x(t-\pi))=0$ with various initial conditions.
The left graph illustrates unstable (chaotic) solutions while in the right graph,
corresponding to the sunflower model with the control $u(t)=-\lambda \delta\dot{x}(t)- \lambda^2 [x(t)-\pi]$,
all three solutions of the controlled equation converge to the equilibrium $\pi$.
}
\label{figure2}
\end{figure}
\end{Ex}
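The controlled sunflower equation of Example~\ref{ex2} is easy to reproduce numerically. The sketch below is not the computation behind Fig.~\ref{figure2}; it is a minimal fixed-step explicit Euler integrator with a history buffer, using the stated parameters ($\lambda=4$, $\delta=\sqrt{2}$, delay $\pi$) and arbitrarily chosen step size and horizon:

```python
import math

def simulate(x0, T=60.0, h=0.001, lam=4.0):
    """Explicit-Euler sketch for  x'' + x' + 2 sin(x(t - pi)) = u(t)  with the
    damping control u(t) = -sqrt(2)*lam*x'(t) - lam**2*(x(t) - pi), lam = 4.
    Constant history x(t) = x0 for t <= 0 and x'(0) = 1, as in Example 2."""
    d = int(math.pi / h)              # the delay pi expressed in steps
    xs = [x0] * (d + 1)               # history buffer; xs[-1] = x(0)
    v = 1.0
    for _ in range(int(T / h)):
        x, xdel = xs[-1], xs[-1 - d]  # x(t) and x(t - pi)
        u = -math.sqrt(2) * lam * v - lam**2 * (x - math.pi)
        a = u - v - 2.0 * math.sin(xdel)
        xs.append(x + h * v)
        v += h * a
    return xs[-1]

for x0 in (6.0, 3.0, 0.1):
    print(abs(simulate(x0) - math.pi) < 0.05)  # all three runs settle at pi
```

Since $\lambda=4>\sqrt{2}+\sqrt{6}$, Theorem~\ref{th2} guarantees convergence to $\pi$, which the three runs reproduce.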
\section{Classical proportional control}
In this section we investigate stabilization with the standard proportional delayed control.
Consider the equation
\begin{equation}\label{1a}
\ddot{x}(t)+a\dot{x}(t)+bx(h(t))=f(t,x(g(t)))
\end{equation}
which has an equilibrium $x^{\ast}$.
The equation
\begin{equation}
\label{1ab}
\ddot{x}(t)+a\dot{x}(t)+bx(h(t))=f(t,x(g(t)))+u(t)
\end{equation}
with the control $u(t)=-b[x(t)-x(h(t))]$ has
the same equilibrium as (\ref{1a})
and can be rewritten as
\begin{equation}\label{20}
\ddot{x}(t)+a\dot{x}(t)+bx(t)=f(t,x(g(t))).
\end{equation}
After the substitution
$x=y+x^{\ast}$ into equation (\ref{20}) we obtain
\begin{equation}\label{21}
\ddot{y}(t)+a\dot{y}(t)+by(t)=p(t,y(g(t))),
\end{equation}
with $p(t,v)=f(t,v+x^{\ast})-bx^{\ast}$, where (\ref{21}) has the zero equilibrium.
\begin{Th}\label{th3}
Suppose $\left|f(t,v+x^{\ast})-bx^{\ast} \right|\leq C |v|$ for any $t$ and $v$, and at least one of the following conditions
\\
a) $C<b\leq a^2/4$; ~~
b) $a^2/4\leq b <a^2/2-C$; ~~
c) $C< a\sqrt{4b-a^2}/4$
\\
holds.
Then the equilibrium $x^{\ast}$ of equation (\ref{1a})
with the control $u(t)=-b[x(t)-x(h(t))]$ is globally
asymptotically stable.
\end{Th}
\begin{proof}
Statements a) and b) of Theorem~\ref{th3} are direct corollaries of Lemma~\ref{lemma2}.
To prove Part c) suppose that $x$ is a solution of equation (\ref{21}). Equation (\ref{21}) can be rewritten in the form
\begin{equation}\label{21a}
\ddot{x}(t)+a\dot{x}(t)+bx(t)=P(t)x(g(t)),
\end{equation}
where ~
$\displaystyle
P(t)=\left\{\begin{array}{cc}
\frac{p(t,x(g(t)))}{x(g(t))},& x(g(t))\neq 0,\\
0,& x(g(t))=0. \end{array}\right.
$~~
Hence the function $x$ is a solution of the linear equation
\begin{equation}\label{21b}
\ddot{y}(t)+a\dot{y}(t)+by(t)=P(t)y(g(t)).
\end{equation}
If $\alpha=0, \beta =C$, then condition c) of the theorem coincides with condition
(\ref{5}) of Lemma~\ref{lemma1}. Hence by Lemma~\ref{lemma1} equation (\ref{21b})
is exponentially stable, i.e.
for any solution $y$ of this equation
we have $\lim\limits_{t\rightarrow\infty} y(t)=0$. Hence for a fixed solution $x$
of equation (\ref{21a}) we also have $\lim\limits_{t\rightarrow\infty} x(t)=0$.
\end{proof}
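The three alternatives of Theorem~\ref{th3} are easy to check mechanically; the same inequalities reappear later as Theorem~\ref{th4} with $b$ replaced by $K$, and as Corollary~\ref{cor5} with $(b,C)$ replaced by $(K,A\omega)$. A small helper (our own sketch, with test values taken from Example~\ref{additional_example} and an arbitrary failing case):

```python
from math import sqrt

def th3_ok(a, b, C):
    """Conditions a), b), c) of Theorem 3 (also Theorem 4 with b -> K,
    and Corollary 5 with b -> K, C -> A*omega)."""
    cond_a = C < b <= a**2 / 4
    cond_b = a**2 / 4 <= b < a**2 / 2 - C
    cond_c = 4 * b > a**2 and C < a * sqrt(4 * b - a**2) / 4
    return cond_a or cond_b or cond_c

print(th3_ok(2.0, 1.0, 0.8))   # True: condition a), as with a = 2, b = 1, C = 0.8
print(th3_ok(0.5, 1.0, 0.8))   # False: weak damping, none of a)-c) holds
```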
Let us examine a popular model
\begin{equation}\label{21abcd}
\ddot{x}(t)+a\dot{x}(t)+bx(h(t))=F(x(g(t))),
\end{equation}
where $F$ is either monotone or non-monotone feedback.
Its applications include the
neuromuscular regulation of movement and posture,
acousto-optical bistability, metal cutting, the cascade
control of fluid level devices and the electronically
clamped pupil light reflex \cite{Cam}.
\begin{Ex}
\label{ex3}
Consider the special case of (\ref{21abcd})
\begin{equation}\label{1A}
\ddot{x}(t)+a\dot{x}(t)+bx(h(t))=\frac{d(t)|x(g(t))|^{m+1}}{1+|x(g(t))|^n},
\end{equation}
where $ 0\leq m<n, |d(t)|\leq d_0.$
Denote
~~ $\displaystyle
\mu=\sup_{v\geq 0}\frac{v^{m}}{1+v^n}=\left\{\begin{array}{ll}
1,& m=0,\\
\displaystyle \frac{m^{m/n}}{n(n-m)^{m/n-1}},& m>0.
\end{array}\right.
$ \\
If the conditions of Theorem~\ref{th3} hold with $C=\mu d_0$ then the zero equilibrium of
equation (\ref{1A}) with the control $u(t)=-b[x(t)-x(h(t))]$ is globally asymptotically stable.
\end{Ex}
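The closed form for $\mu=\sup_{v\geq 0} v^{m}/(1+v^{n})$ quoted in Example~\ref{ex3} follows from maximizing at $v=(m/(n-m))^{1/n}$. A brute-force sketch (our own check, with arbitrary exponents $m=2$, $n=5$ and an ad hoc grid) confirms it numerically:

```python
def mu_closed(m, n):
    """Closed-form value of sup_{v >= 0} v^m / (1 + v^n), 0 <= m < n."""
    if m == 0:
        return 1.0
    return m**(m / n) / (n * (n - m)**(m / n - 1))

def mu_grid(m, n, N=200000, vmax=50.0):
    """Brute-force grid maximum; the sup is attained at v = (m/(n-m))^(1/n) < vmax."""
    return max((i * vmax / N)**m / (1 + (i * vmax / N)**n) for i in range(N + 1))

print(abs(mu_closed(2, 5) - mu_grid(2, 5)) < 1e-6)  # True
print(mu_closed(0, 7))  # 1.0 (the sup is attained at v = 0 when m = 0)
```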
\begin{Ex}
\label{additional_example}
Consider the particular case of (\ref{21abcd})
\begin{equation}\label{1abc}
\ddot{x}(t)+ 2 \dot{x}(t)+ x(h(t))=\frac{0.8 x(g(t))}{1+x^{n}(g(t))},
\end{equation}
where $n \geq 6$. As can be easily verified, the range of the function $f(x)=1.6 x/(1+x^n)$ includes $[-1,1]$ for $n \geq 6$.
Let us demonstrate that for a certain choice of $h(t)$ and $g(t)$ the function $x(t) =\sin(t/4)$
is a solution. We restrict ourselves to the interval $[0,8\pi]$, and then extend it in such a way that
both $t-h(t)$ and $t-g(t)$ are periodic with period $8\pi$. We can find $h(t)\in [0,t]$ such that
$\sin(h(t)/4)= \frac{1}{16} \sin(t/4)$: since $\sin(0)=0$, by continuity $\sin(h/4)$ takes all values between zero and $\sin(t/4)$ as $h$ runs over $[0,t]$.
As mentioned above, the function $y=1.6 u/(1+u^{n})$ takes all the values $y\in [-1,1]$ for $u\in [-1,1]$,
and $\sin(x/4)$ takes all the values between $-1$ and $1$ for $x\in [-4\pi, t]$; hence there is $g(t) \in [-4\pi, t]$ such that
$\displaystyle \frac{1}{2} \cos \left( \frac{t}{4} \right)=\frac{0.8 \sin(g(t)/4)}{1+\sin^{n}(g(t)/4)}$.
Then $x(t) =\sin(t/4)$ is a solution of (\ref{1abc}) on $[0,8\pi]$, with the same initial function on $[-4\pi,0]$.
Further, we extend $h(t+8\pi)=h(t)+8\pi$, $g(t+8\pi)=g(t)+8\pi$ and obtain that $x(t) =\sin(t/4)$ is a solution of (\ref{1abc}),
$t \geq 0$, with $\varphi(t)=\sin(t/4)$, $t\leq 0$, and a bounded (by $16\pi$) delay.
Hence, equation (\ref{1abc}) is not asymptotically stable.
Equation (\ref{1abc}) with control $u=-(x(t)-x(h(t)))$ becomes ~~
$\displaystyle
\ddot{x}(t)+ 2 \dot{x}(t)+ x(t)=\frac{0.8 x(g(t))}{1+x^n(g(t))}$,
and it is globally asymptotically stable by Theorem~\ref{th3}, Part a), since $C=0.8<b=1=a^2/4$.
Fig.~\ref{figure3} numerically illustrates the results for the constant delay $g(t)=t-\tau$.
\end{Ex}
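The claim that the range of $f(x)=1.6x/(1+x^{n})$ includes $[-1,1]$ exactly for $n\geq 6$ can be checked by evaluating the peak of $f$ on $[0,1]$ (since $f$ is odd). A small sketch of our own, on an ad hoc grid:

```python
def peak(n, N=100000):
    """Grid maximum of f(u) = 1.6 u / (1 + u^n) over u in [0, 1]."""
    return max(1.6 * (i / N) / (1 + (i / N)**n) for i in range(N + 1))

# f is odd, so its range covers [-1, 1] exactly when the peak reaches 1:
print(all(peak(n) >= 1.0 for n in range(6, 16)))  # True
print(peak(5) < 1.0)                              # True: n = 5 is not enough
```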
\begin{figure}[ht]
\centering
\vspace{-15mm}
\includegraphics[scale=0.38]{nopcontrol} \hspace{-6mm}
\includegraphics[scale=0.38]{pcontrol.pdf}
\vspace{-28mm}
\caption{Stabilization with a proportional control for the equation
$\ddot{x}(t)+2\dot{x}(t)+x(t-\tau)=\frac{0.8 x(t-\tau)}{1+x^{8}(t-\tau)}$ with $\tau=10$.
The left graph illustrates an unstable (oscillating and unbounded) solution while in the right graph,
the control $u(t)=x(t-\tau)-x(t)$ produces a stable trajectory.
}
\label{figure3}
\end{figure}
Consider the nonlinear equation
\begin{equation}\label{2a}
\ddot{x}(t)+a\dot{x}(t)+f(t,x(h(t)))=0
\end{equation}
which has an equilibrium $x(t)=x^{\ast}$.
For stabilization we will use the controller $u=-K[x(t)-x^{\ast}]$, $K>0$,
and obtain the equation
\begin{equation}\label{3a}
\ddot{x}(t)+a\dot{x}(t)+f(t,x(h(t)))=-K[x(t)-x^{\ast}].
\end{equation}
The substitution of $y(t)=x(t)-x^{\ast}$ into equation (\ref{3a}) yields
\begin{equation}\label{4a}
\ddot{y}(t)+a\dot{y}(t)+p(t,y(h(t)))=-Ky(t),
\end{equation}
where $p(t,v)=f(t,v+x^{\ast})$.
\begin{Th}\label{th4}
Suppose $\displaystyle \left| f(t,v+x^{\ast}) \right| \leq C |v|$, and
at least one of the following conditions holds:
\\
a) $C<K\leq a^2/4$;
~~
b) $a^2/4\leq K<a^2/2-C$;
~~
c) $ C< a\sqrt{4K-a^2}/4$.
\\
Then the equilibrium $x^{\ast}$ of equation (\ref{2a}) with the control
$u=-K(x(t)-x^{\ast})$ is globally asymptotically stable.
\end{Th}
\begin{proof}
Equation (\ref{4a}) has the form
$
\ddot{y}(t)+a\dot{y}(t)+Ky(t)=-p(t,y(h(t))),
$
and application of Lemmas~\ref{lemma1} and \ref{lemma2} concludes the proof.
\end{proof}
To illustrate application of Theorem \ref{th4}, consider the sunflower equation
\begin{equation}\label{5a}
\ddot{x}(t)+a\dot{x}(t)+A\sin (\omega x(h(t)))=0,~~ a,A,\omega>0.
\end{equation}
This equation has an infinite number of unstable equilibrium points $x=\frac{(2k+1)\pi}{\omega}$, $k=0,\pm 1,\pm 2,\dots$, see \cite{BBI}.
To stabilize a fixed equilibrium $x^*=\frac{(2k+1)\pi}{\omega}$ of equation \eqref{5a}, we choose the
controller $u=-K\left[x(t)-\frac{(2k+1)\pi}{\omega}\right], K>0$, i.e.
\begin{equation}\label{6a}
\ddot{x}(t)+a\dot{x}(t)+A\sin (\omega x(h(t)))=-K\left[x(t)-\frac{(2k+1)\pi}{\omega}\right].
\end{equation}
Since $\displaystyle \left|A\sin (\omega v)\right|\leq A\omega |v|$, Theorem~\ref{th4} implies the following result.
\begin{Cor}\label{cor5}
Suppose at least one of the conditions holds:
\\
a) $A\omega<K\leq a^2/4$;
~~
b) $a^2/4\leq K<a^2/2-A\omega$;
~~
c) $A\omega< a \sqrt{4K-a^2}/4$.
Then the equilibrium $x^{\ast}=\frac{(2k+1)\pi}{\omega}$ of equation (\ref{6a})
is globally asymptotically stable.
\end{Cor}
\section{Summary}
The results of the paper can be summarized as follows:
\begin{enumerate}
\item
For a wide class of nonlinear delay second order equations,
we developed stabilizing controls combining proportional feedback with
derivative (damping) feedback.
\item
We designed a standard feedback controller which allows one to stabilize a second order nonlinear
equation with a linear nondelay damping term.
\end{enumerate}
The results are illustrated using nonlinear models with several equilibrium points, for example, modifications of the
sunflower equation.
\subsection{Acknowledgments}
L. Berezansky was partially supported by the Israeli Ministry of Absorption,
E. Braverman was partially supported by the NSERC DG Research Grant,
L. Idels was partially supported by a grant from VIU.
The authors are very grateful to the editor and the anonymous referees whose comments significantly contributed to the
better presentation of the
results of the paper.
\begin{document}
\title{\Large Properties of the $\epsilon$-Expansion, Lagrange Inversion and Associahedra and the $O(1)$ Model. }
\author{Thomas A. Ryttov$^{\varheartsuit}$}\email{[email protected]}
\affiliation{
$^{\varheartsuit}${ \rm CP}$^{\bf 3}${ \rm-Origins}, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark}
\begin{abstract}
We discuss properties of the $\epsilon$-expansion in $d=4-\epsilon$ dimensions. Using Lagrange inversion we write down an exact expression for the value of the Wilson-Fisher fixed point coupling order by order in $\epsilon$ in terms of the beta function coefficients. The $\epsilon$-expansion is combinatoric in the sense that the Wilson-Fisher fixed point coupling at each order depends on the beta function coefficients via Bell polynomials. Using certain properties of Lagrange inversion we then argue that the $\epsilon$-expansion of the Wilson-Fisher fixed point coupling equally well can be viewed as a geometric expansion which is controlled by the facial structure of associahedra. We then write down an exact expression for the value of anomalous dimensions at the Wilson-Fisher fixed point order by order in $\epsilon$ in terms of the coefficients of the beta function and anomalous dimensions.
We finally use our general results to compute the values of the Wilson-Fisher fixed point coupling and critical exponents for the scalar $O(1)$ symmetric model to $O(\epsilon^7)$.
\end{abstract}
\maketitle
\section{Introduction}
The scalar $O(N)$ symmetric theory $(\phi^i\phi^i)^2$ in $d=4-\epsilon$ dimensions is extremely rich in the number of physical phenomena that it can accommodate and describe. For instance for $N=0$ it coincides with the critical behavior of polymers (self-avoiding walks) while for $N=1$ it lies in the same universality class as liquid-vapor transitions and uniaxial magnets (Ising). For $N=2$ the theory describes the superfluid transition of liquid helium close to the $\lambda$-point ($XY$). For $N=3$ it describes the critical behavior of isotropic ferromagnets (Heisenberg). There are many more examples of critical phenomena that are described by the scalar $O(N)$ symmetric model and its close relatives for which the reader can find a more complete discussion in \cite{Pelissetto:2000ek}. It suffices to say that further understanding of the theory is of great importance.
For these reasons much effort has been put into computing various renormalization group functions in $d=4-\epsilon$ dimensions to higher and higher order in the coupling. The three and four loop calculation \cite{Brezin:1976vw,Kazakov:1979ik} as well as the five loop calculation \cite{Gorishnii:1983gp,Chetyrkin:1981jq,Kazakov:1984km,LeGuillou:1985pg,Kleinert:1991rg,Guida:1998bx} long stood as the state-of-the-art of higher loop calculations in $d=4-\epsilon$ dimensions. But recently the six loop calculations initiated in \cite{Batkovich:2016jus,Kompaniets:2016hct} were brought to the end in \cite{Kompaniets:2017yct} for general $N$. In addition an impressive seven loop calculation in the special case of $N=1$ has appeared \cite{Schnetz:2016fhy}. These now constitute the highest loop order calculations of the renormalization group functions which include the beta function, the field anomalous dimension and the mass anomalous dimension.
At sufficiently small $\epsilon$ the one loop beta function possesses a non-trivial zero \cite{Wilson:1971dc}. This is the Wilson-Fisher fixed point. At this fixed point anomalous dimensions and their associated critical exponents are expressed as power series in $\epsilon$. At a fixed point these critical exponents are scheme independent physical quantities. For general $N$ and using the six loop calculations they can be found in analytical form to $O(\epsilon^5)$ in the attached Mathematica file in \cite{Kompaniets:2017yct}. When combining these results with resummation techniques they agree well with experiments \cite{LeGuillou:1977rjt,LeGuillou:1979ixc,LeGuillou:1985pg,Kompaniets:2017yct}. We also note several recent complementary studies of scalar field theories in various dimensions \cite{Rychkov:2018vya,Hogervorst:2015akt,Rychkov:2015naa,Fei:2014yja,Fei:2014xta,Rattazzi:2008pe,ElShowk:2012ht,El-Showk:2014dwa,Kos:2015mba,Kos:2016ysd,Hellerman:2015nra,Arias-Tamargo:2019xld,Watanabe:2019pdh,Badel:2019oxl,Sen:2015doa,Gliozzi:2017hni,Liendo:2017wsn,Dey:2017oim,Banerjee:2019jpw,Litim:2016hlb,Juttner:2017cpr}.
In this work we do not attempt to extend any of the higher loop calculations. We will instead entertain ourselves by providing explicit and closed form expressions for the fixed point value of the coupling and any anomalous dimension or critical exponent at the Wilson-Fisher fixed point to all orders in $\epsilon$. The closed form all orders expressions are then functions of all the coefficients of the beta function and anomalous dimension (which still need to be calculated). The main feature of the Wilson-Fisher fixed point is the fact that it is an expansion in the first coefficient (which is $\epsilon$) of the beta function in $d=4-\epsilon$. As we will see this allows us to use the Lagrange inversion theorem to formally find the zero of the beta function to all orders in $\epsilon$.
Given the formal power series $f(x) = \sum_{i=1}^{\infty} c_i x^i$ one might ask whether it is possible to find the coefficients $d_i$ in the power series $g(y) = \sum_{i=1}^{\infty} d_i y^i$ where $f$ and $g$ are each other's compositional inverses $g(f(x)) = x$ and $f(g(y) ) = y$. The Lagrange inversion theorem provides a procedure for doing this and for the first few orders one finds $d_1 =1/c_1 $, $d_2=-c_2/c_1^3$, $d_3 = (2c_2^2 - c_1c_3)/c_1^5$, etc. There is also a general closed form expression for $d_i$ at any order which is given in terms of Bell polynomials. This can be found for instance in \cite{MorseFeshbach}.
In our case where we study critical phenomena via the $\epsilon$-expansion it will be more convenient for us to phrase the question of power series inversion in a slightly different but equivalent way. Instead of asking for the compositional inverse of $f(x) = \sum_{i=1}^{\infty} c_i x^i$ we will ask for the zero of $ \tilde{f}(x) $ where $\tilde{f}(x) \equiv -a+f(x)= - a+ \sum_{i=1}^{\infty} c_i x^i$. The zero of $\tilde{f}(x)$ we are looking for should then be given as a power series in the first coefficient $a$ of $\tilde{f}(x)$ and be written as $x = \sum_{i=1}^{\infty}d_i a^i$. Of course one again finds $d_1 = 1/c_1$, $d_2 = - c_2/c_1^3$, $d_3 = (2c_2^2 - c_1c_3)/c_1^5$, etc as above. It is more natural for us to consider Lagrange inversion in this way. Further details will follow below where we will see the coefficients $d_i$ presented in both a combinatoric and geometric language within the physical setting of critical phenomena and the $\epsilon$-expansion.
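The first few inversion coefficients $d_1=1/c_1$, $d_2=-c_2/c_1^3$, $d_3=(2c_2^2-c_1c_3)/c_1^5$ can be reproduced by exactly the order-by-order matching described here. The following symbolic sketch (using SymPy, our own choice of tool) truncates the series at cubic order, which suffices through $d_3$:

```python
import sympy as sp

a = sp.symbols('a')
c1, c2, c3 = sp.symbols('c1 c2 c3', nonzero=True)

# Solve 0 = -a + c1*x + c2*x**2 + c3*x**3 for x = d1*a + d2*a**2 + d3*a**3,
# matching powers of a one order at a time (Lagrange inversion).
series, d = sp.Integer(0), []
for i in range(1, 4):
    di = sp.Symbol(f'd{i}')
    series += di * a**i
    f = -a + c1 * series + c2 * series**2 + c3 * series**3
    sol = sp.solve(sp.expand(f).coeff(a, i), di)[0]  # coefficient of a^i must vanish
    series = series.subs(di, sol)
    d.append(sp.simplify(sol))

print(sp.simplify(d[0] - 1 / c1))                          # 0
print(sp.simplify(d[1] + c2 / c1**3))                      # 0
print(sp.simplify(d[2] - (2 * c2**2 - c1 * c3) / c1**5))   # 0
```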
The use of Lagrange inversion of power series in physics has a beautiful history in celestial mechanics and originates in the study of the restricted Newtonian three-body problem. Here one has to solve a quintic equation $0 = a+\sum_{i=1}^{5} c_i x^i $ in $x$ for three sets of coefficients in order to find the location of the three collinear Lagrange points $L_1$, $L_2$ and $L_3$. Although it is impossible to find general solutions in terms of radicals of a quintic equation it is possible to find a single real solution which is an expansion in the coefficient $a$ as $x = \sum_{i=1}^{\infty} d_i a^i $. One can in principle compute the coefficients $d_i$ to any desired order using the Lagrange inversion theorem. The success of the Lagrange method to find the locations of $L_1$, $L_2$ and $L_3$ is perhaps best illustrated by the fact that in the Earth-Sun system the Solar and Heliospheric Observatory (SOHO) occupies $L_1$ while the Wilkinson Microwave Anisotropy Probe (WMAP) occupies $L_2$.
The paper is organized as follows. In Section \ref{WF} we derive the value of the Wilson-Fisher fixed point coupling in terms of the beta function coefficients to all orders in $\epsilon$ and show that it is given in terms of Bell polynomials. In Section \ref{associahedron} we press on and discuss how the $\epsilon$-expansion of the Wilson-Fisher fixed point coupling also can be understood as a geometric expansion controlled by associaheda. In Section \ref{anodims} we discuss anomalous dimensions while in Section \ref{O(1)} we compute the fixed point coupling and critical exponents to $O(\epsilon^7)$ for the $O(1)$ model. We conclude in Section \ref{conclusions}.
\section{The Wilson-Fisher Fixed Point}\label{WF}
We write the beta function of the coupling $g$ as $\beta(g,\epsilon)$ and in $d=4 - \epsilon$ dimensions it is generally given as a formal power series expansion
\begin{eqnarray}
\beta (g,\epsilon) &=& -\epsilon g + \sum_{i=1}^{\infty} b_{i-1} g^{i+1} = -\epsilon g + b_0 g^2 + b_1 g^3 +\ldots
\end{eqnarray}
where $b_i$, $i=0,1,\ldots$ are the standard beta function coefficients. We are interested in the fixed points $\beta(g_*,\epsilon) = 0$ of the theory in $d= 4 - \epsilon$ dimensions. Clearly the trivial fixed point $g_*= 0$ always exists. At one loop the beta function also possesses a non-trivial fixed point $g_* = \frac{\epsilon}{b_0} $. This is the Wilson-Fisher fixed point.
We will be interested in the physics of the Wilson-Fisher fixed point and how to derive a closed form exact expression for the fixed point value of the coupling $g_*$ to all orders in $\epsilon$. First write the fixed point equation $\beta(g_*,\epsilon) = 0$ we want to solve as
\begin{eqnarray}\label{FPequation}\label{zero}
0 &=&- \tilde{\epsilon} + \sum_{i=1}^{\infty} \tilde{b}_{i-1} g_*^{i} = - \tilde{\epsilon} + g_* + \tilde{b}_1 g_*^2 + \tilde{b}_2 g_*^3+ \ldots
\end{eqnarray}
where for convenience we have chosen to normalize everything as $\tilde{\epsilon} = \frac{\epsilon}{b_0}$ and $\tilde{b}_{i} = \frac{b_i}{b_0}$ so that the coefficient of $g_*$ is unity. This is purely a matter of convenience. We want to find the zero $g_*(\tilde{\epsilon})$ which is a power series in $\tilde{\epsilon}$. As already explained this is encoded in the Lagrange inversion theorem. Finding $g_*(\tilde{\epsilon})$ amounts to determining the coefficients $g_i$ in the power series
\begin{eqnarray}
g_*(\tilde{\epsilon}) &=& \sum_{i=1}^{\infty} g_{i-1} \tilde{\epsilon}^i = g_0 \tilde{\epsilon} +g_1 \tilde{\epsilon}^2 + g_2 \tilde{\epsilon}^3+ \ldots
\end{eqnarray}
One way to do this is to first plug the ansatz for $g_*(\tilde{\epsilon})$ back into Eq. \ref{zero} and then expand again in $\tilde{\epsilon}$ to arrive at\footnote{An alternative method can again be found in \cite{MorseFeshbach}.}
\begin{eqnarray}\label{expansion}
0 &=& - \tilde{\epsilon} + \sum_{j=1}^{\infty} \tilde{b}_{j-1} \left( \sum_{l=1}^{\infty} g_{l-1} \tilde{\epsilon}^l \right)^j = \left( g_0-1\right) \tilde{\epsilon} + \left( g_1 + g_0^2 \tilde{b}_1 \right) \tilde{\epsilon}^2 + \left(g_2 + 2 g_0g_1 \tilde{b}_1 + g_0^3\tilde{b}_2 \right) \tilde{\epsilon}^3 + \ldots + c_i \tilde{\epsilon}^i+ \dots \nonumber \\
\end{eqnarray}
In order to write an explicit closed form expression for the $i$'th coefficient $c_i$ we need to know what terms in the composite set of sums will contribute at $O(\tilde{\epsilon}^i)$. If we first look at the infinite sum $ \sum_{l=1}^{\infty} g_{l-1} \tilde{\epsilon}^l $ then clearly only a finite number of terms $g_0 \tilde{\epsilon} +g_1 \tilde{\epsilon}^2 +\ldots + g_{k-1} \tilde{\epsilon}^k$ for some $k \leq i$, can eventually contribute at $O(\tilde{\epsilon}^i)$. How large can $k$ be until the term $g_{k-1} \tilde{\epsilon}^k$ will no longer contribute at $O(\tilde{\epsilon}^i)$? Once we take the finite number of terms to the power $j$, the term that involves a single factor of $g_{k-1} \tilde{\epsilon}^k$ and is lowest order in $\tilde{\epsilon}$ is $(g_0 \tilde{\epsilon})^{j-1} g_{k-1} \tilde{\epsilon}^k $. So if this is to be of order $O(\tilde{\epsilon}^i)$ then clearly $k=i-j+1$. Using the multinomial formula we can therefore write
\begin{eqnarray}
\left(g_0 \tilde{\epsilon} +g_1 \tilde{\epsilon}^2 +\ldots + g_{i-j} \tilde{\epsilon}^{i-j+1} \right)^j = \sum_{j_1+\ldots + j_{i-j+1} = j} \frac{j!}{j_1 !\cdots j_{i-j+1}!} \left( g_0^{j_1} \cdots g_{i-j}^{j_{i-j+1}} \right) \tilde{\epsilon}^{1j_1 + \ldots + (i-j+1) j_{i-j+1}}
\end{eqnarray}
where the sum is over all sequences $j_1,\ldots,j_{i-j+1}$. Precisely for $1j_1 + \ldots + (i-j+1) j_{i-j+1} = i$ we pick up the term of $O(\tilde{\epsilon}^i)$ and we can therefore read off the desired coefficient as
\begin{eqnarray}\label{coefficient}
c_i = \frac{1}{i!} \sum_{j=1}^i j! \tilde{b}_{j-1} B_{i,j} \left(1! g_0,\ldots, (i-j+1)!g_{i-j} \right)
\end{eqnarray}
We have here chosen to write the coefficient $c_i$ in terms of the Bell polynomials
\begin{eqnarray}
B_{i,j}(x_1,\ldots,x_{i-j+1}) = \mathop{\sum_{j_1 + \ldots + j_{i-j+1} =j}}_{1j_1 +\ldots + (i-j+1) j_{i-j+1} =i} \frac{i!}{j_1!\cdots j_{i-j+1}!} \left(\frac{x_1}{1!} \right)^{j_1} \cdots \left( \frac{x_{i-j+1}}{(i-j+1)!} \right)^{j_{i-j+1}}
\end{eqnarray}
For completeness we note that it is also possible to find the coefficient by using the Fa{\`a} di Bruno formula for the $i$'th derivative of a composite function. However the above derivation is quite straightforward and perhaps not everyone is familiar with the Fa{\`a} di Bruno formula.\footnote{The Fa{\`a} di Bruno formula is \begin{eqnarray}
\frac{d^i }{d x^i} f(h(x)) &=& \sum_{j=1}^i f^{(j)} (h(x)) B_{i,j} \left( h'(x), h''(x), \dots, h^{(i-j+1)} (x) \right)
\end{eqnarray}} Also for a discussion on Bell polynomials we refer the reader to \cite{Comtet}.
Bell polynomials are well known in combinatorics. They give a way to encode, in a polynomial, the partitioning of a set into non-empty, non-overlapping subsets. Assume we have a set of $i$ elements that we want to partition into $j$ non-empty, non-overlapping subsets. Each subset in the partition will contain some number of original elements $k$, which can be any $k=1,\ldots,i-j+1$. If there are $l$ such subsets and this specific partitioning can be done in $m$ ways then this is encoded as $mx_k^l$ in the Bell polynomial $B_{i,j}(x_1,\ldots,x_{i-j+1})$.\footnote{An example: Take a set with $i=5$ elements $\{a,b,c,d,e\}$ that we want to partition into $j=3$ subsets. First this can be done as $\{\{ a,b \},\{ c,d\}, \{e\}\}$ where $l=1$ subsets contain $k=1$ elements and $l=2$ subsets contain $k=2$ elements. This partitioning can be done in $m=15$ different ways. The partitioning into three subsets can also be done as $\{ \{a,b,c \},\{d\},\{e\} \}$ where $l=2$ subsets contain $k=1$ elements and $l=1$ subsets contain $k=3$ elements. This partitioning can be done in $m=10$ different ways. All this is precisely encoded in the Bell polynomial $B_{5,3}(x_1,x_2,x_3) = 15 x_1x_2^2+ 10x_1^2x_3$ }
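As a quick cross-check, the Bell polynomial quoted in the footnote can be reproduced with SymPy, which implements the incomplete (partial) Bell polynomials through its \texttt{bell} function (the use of SymPy here is our own illustration, not part of the derivation):

```python
# Reproduce the footnote example B_{5,3}(x1, x2, x3) = 15 x1 x2^2 + 10 x1^2 x3
# using SymPy's incomplete Bell polynomials.
from sympy import bell, symbols, expand

x1, x2, x3 = symbols('x1 x2 x3')
B53 = bell(5, 3, (x1, x2, x3))  # incomplete Bell polynomial B_{5,3}
assert expand(B53 - (15*x1*x2**2 + 10*x1**2*x3)) == 0
```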
The fact that the coefficients $c_i$ are combinatorial in the $\tilde{b}_i$'s and $g_i$'s should of course not come as a surprise since they arise from the $i$'th derivative of a composite function. Now it is important to realize that the expansion in Eq. \ref{expansion} should hold for varying $\tilde{\epsilon}$ so we can equate coefficients order by order in $\tilde{\epsilon}$. The coefficient of $\tilde{\epsilon}$ allows us to solve for $g_0$, the coefficient of $\tilde{\epsilon}^2$ allows us to solve for $g_1$ (already knowing $g_0$) and the coefficient of $\tilde{\epsilon}^3$ allows us to solve for $g_2$ (already knowing $g_0$ and $g_1$). This can be done order by order to any order. In general the coefficient $c_i$ of $\tilde{\epsilon}^i$ depends on $g_0,\ldots,g_{i-1}$ and is linear in $g_{i-1}$ so one can always solve for it (already knowing $g_0,\ldots,g_{i-2}$). It is linear in $g_{i-1}$ since the Bell polynomials satisfy the identity $B_{i,1}(1!g_0,\ldots,i!g_{i-1}) = i! g_{i-1} $ which comes about in the $j=1$ term in the sum in $c_i$. Order by order we then find
\begin{eqnarray}
g_0 &=& 1\label{g0combinatoric} \\
g_1 &=& - \tilde{b}_1 \label{g1combinatoric} \\
g_2 &=& 2\tilde{b}_1^2 -\tilde{b}_2 \label{g2combinatoric} \\
g_3 &=& -5\tilde{b}_1^3 + 5 \tilde{b}_1\tilde{b}_2 - \tilde{b}_3 \label{g3combinatoric} \\
g_4 &=& 14 \tilde{b}_1^4 - 21 \tilde{b}_1^2 \tilde{b}_2 + 3 \tilde{b}_2^2 + 6 \tilde{b}_1 \tilde{b}_3 - \tilde{b}_4 \label{g4combinatoric}\\
&\vdots& \nonumber \\
g_{i-1} &=& \frac{1}{i!} \sum_{j=1}^{i-1} (-1)^j \tilde{i} B_{i-1,j} \left(1! \tilde{b}_1 ,\ldots, (i-j)!\tilde{b}_{i-j} \right) \label{gcombinatoric}
\end{eqnarray}
where $\tilde{i} = i(i+1)\cdots (i+j-1)$ is a rising factorial (note that it depends on both $i$ and the summation index $j$). This gives us every coefficient $g_{i-1}$ and hence the value of the fixed point coupling to all orders in $\epsilon$ in terms of the beta function coefficients
\begin{eqnarray}\label{eq:g}
g_*(\epsilon) = \sum_{i=1}^{\infty} g_{i-1} \left( \frac{\epsilon}{b_0} \right)^i \ , \qquad g_{i-1} = \frac{1}{i!} \sum_{j=1}^{i-1} (-1)^j \tilde{i} B_{i-1,j} \left(\frac{1! b_1}{b_0} ,\ldots, \frac{(i-j)! b_{i-j}}{b_0} \right) \ , \qquad g_0=1
\end{eqnarray}
This is a very concise and compact formula for the fixed point value to all orders in $\epsilon$ and is combinatoric in the sense that it is given in terms of the Bell polynomials.
Note that for a given $i$ the coefficient $g_{i-1}$ depends on the $i$ loop beta function coefficients $b_0,\ldots,b_{i-1}$ only and does not receive corrections from higher orders. So for the scalar $O(N)$ symmetric model where the beta function is known to six loops \cite{Kompaniets:2016hct} we can calculate the first six coefficients $g_i$, $i=0,\dots,5$. Upon insertion of the six loop beta function coefficients into $g_i$, $i=0,\dots,5$ we find complete agreement with the reported results in the Mathematica file accompanying \cite{Kompaniets:2016hct}. This is a check of our formal manipulations.
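The closed form Eq. \ref{eq:g} is also easy to verify by computer algebra. The short SymPy sketch below (symbol names and the use of SymPy's \texttt{bell} are ours) evaluates the combinatorial formula for $g_{i-1}$ with the $\tilde{b}_m$ kept symbolic and reproduces Eqs. \ref{g1combinatoric}--\ref{g4combinatoric}:

```python
# Evaluate g_{i-1} = (1/i!) sum_j (-1)^j i(i+1)...(i+j-1)
#                    B_{i-1,j}(1! b~1, ..., (i-j)! b~_{i-j})
# with the b~_m kept as symbols, and compare against the listed low orders.
from sympy import bell, symbols, factorial, expand, Rational

b = symbols('b1:5')                       # b[m-1] stands for b~_m

def g(i):
    if i == 1:
        return 1                          # g_0 = 1 by definition
    total = 0
    for j in range(1, i):
        rising = factorial(i + j - 1) / factorial(i - 1)   # i(i+1)...(i+j-1)
        args = tuple(factorial(m) * b[m - 1] for m in range(1, i - j + 1))
        total += (-1)**j * rising * bell(i - 1, j, args)
    return expand(Rational(1, factorial(i)) * total)

assert expand(g(2) - (-b[0])) == 0
assert expand(g(3) - (2*b[0]**2 - b[1])) == 0
assert expand(g(4) - (-5*b[0]**3 + 5*b[0]*b[1] - b[2])) == 0
assert expand(g(5) - (14*b[0]**4 - 21*b[0]**2*b[1] + 3*b[1]**2
                      + 6*b[0]*b[2] - b[3])) == 0
```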
\section{The $\epsilon$-Expansion as a Geometric Expansion: Associahedra}\label{associahedron}
It was expected that the $\epsilon$-expansion of the Wilson-Fisher coupling order by order is some combinatorial function of the beta function coefficients. However it has only very recently become clear in the mathematics literature \cite{Loday,Loday2,Aguiar} that Lagrange inversion also has a geometric interpretation. In fact instead of viewing the arrangement of the beta function coefficients at each order in $\epsilon$ as a combinatorial exercise we can see it as being controlled by a specific polytope known as an associahedron. We note that the associahedron has also recently found its way into other branches of theoretical physics including scattering amplitudes \cite{Mizera:2017cqs,Arkani-Hamed:2017mur}.
There are many ways to realize an associahedron \cite{Tamari,Stasheff1,Stasheff2,Loday,Loday2} (see \cite{Ziegler} for an introduction). We will define the associahedron $K_i$ as a convex polytope of dimension $i-2$. If we have a string of $i$ elements then each vertex corresponds to inserting parentheses in this string and each edge corresponds to using the associativity rule for replacing the parentheses a single time. We now construct the first few associahedra with $i=1,\ldots,5$. In Fig. \ref{fig:sub1}--\ref{fig:sub3} we show the zero, one, two and three dimensional associahedra $K_2$, $K_3$, $K_4$ and $K_5$.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.5\linewidth]{K2K3.pdf}
\caption{$K_2$ (point) and $K_3$ (line)}
\label{fig:sub1}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.5\linewidth]{K4.pdf}
\caption{$K_4$ (pentagon)}
\label{fig:sub2}
\end{subfigure}
\\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.4\linewidth]{K5front.pdf}\qquad
\includegraphics[width=.4\linewidth]{K5back.pdf}
\caption{Front and back of $K_5$}
\label{fig:sub3}
\end{subfigure}
\caption{The associahedra $K_2$, $K_3$, $K_4$ and $K_5$.}
\label{fig:K2K3K4K5}
\end{figure}
\begin{itemize}
\item If $i=1$ the associahedron $K_1$ is defined to be the empty set.
\item If $i=2$ there is only one way to put a set of parentheses in the string of $i=2$ elements $(ab)$. So the zero dimensional associahedron $K_2$ is a point.
\item If $i=3$ there are two ways to put the parentheses in the string of $i=3$ elements $(ab)c$ and $a(bc)$. So there are two vertices and the one dimensional associahedron $K_3$ is a line.
\item If $i=4$ there are five different ways to put the parentheses in the string of $i=4$ elements $((ab)c)d$, $(ab)(cd)$, $a(b(cd))$, $a((bc)d)$ and $(a(bc))d$. So there are five vertices and the two dimensional associahedron $K_4$ is the pentagon.
\item If $i=5$ there are fourteen ways to put the parentheses in the string of $i=5$ elements $((ab)(cd))e$, $(ab)((cd)e)$, $((ab)c)(de)$, $(ab)(c(de))$, $(a(bc))(de)$, $a((bc)(de))$, $((a(bc))d)e$, $(a((bc)d))e$, $a(((bc)d)e)$, $a((b(cd))e)$, $a(b((cd)e))$, $(a(b(cd)))e$, $(((ab)c)d)e$, $a(b(c(de)))$. So there are fourteen vertices and the three dimensional associahedron $K_5$ is composed of six pentagons and three squares.
\end{itemize}
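The vertex counts $1, 2, 5, 14$ in the list above are Catalan numbers: the number of complete parenthesizations of a string of $i$ letters satisfies an obvious recursion on the position of the outermost split. A small illustrative script (ours, not part of the construction):

```python
# Count complete parenthesizations (binary bracketings) of i letters and
# compare with the vertex counts quoted above, i.e. the Catalan numbers C_{i-1}.
from functools import lru_cache

@lru_cache(maxsize=None)
def bracketings(i):
    if i == 1:
        return 1
    # outermost split: left block of k letters, right block of i-k letters
    return sum(bracketings(k) * bracketings(i - k) for k in range(1, i))

assert [bracketings(i) for i in range(1, 6)] == [1, 1, 2, 5, 14]
```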
A few facts about the associahedron are relevant for us. Details can be found in \cite{Tamari,Stasheff1,Stasheff2,Loday,Loday2,Ziegler}. The associahedron $K_i$ of dimension $i-2$ has faces of dimension $j$ with $j\leq i-2$. There is of course only one face of dimension $j=i-2$, which is the associahedron itself. The number of $j$-dimensional faces with $j\leq i-3$ of the associahedron $K_i$ is \cite{simion}
\begin{eqnarray}
T(i,j) = \frac{1}{i-j-1} {{i-2}\choose {i-j-2}} {{2i-j-2}\choose {i-j-2}}
\end{eqnarray}
In the number triangle in Fig. \ref{numbertriangle} we show the number of faces of different dimensions for the first few associahedra. Specifically the number of vertices $T(i,0) = \frac{1}{i} {{2i-2}\choose {i-1}}$ is the $(i-1)$'th Catalan number and forms the leftmost column of the number triangle. Finally we will need the fact that the faces of the associahedron $K_i$ are isomorphic to Cartesian products of lower dimensional associahedra $F = K_{i_1} \times \cdots \times K_{i_m}$. This includes the faces which are not themselves associahedra, such as the three squares in $K_5$ which are isomorphic to $K_3\times K_3$.
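The face-count formula and the number triangle in Fig. \ref{numbertriangle} can be checked directly; the following illustrative script uses Python's \texttt{math.comb} for the binomial coefficients:

```python
# Verify the face counts T(i,j) of the associahedron K_i against the
# number triangle, and the leftmost column against the Catalan numbers.
from math import comb

def T(i, j):
    """Number of j-dimensional faces of K_i (formula valid for j <= i-3)."""
    return comb(i - 2, i - j - 2) * comb(2*i - j - 2, i - j - 2) // (i - j - 1)

assert [T(4, j) for j in range(2)] == [5, 5]
assert [T(5, j) for j in range(3)] == [14, 21, 9]
assert [T(6, j) for j in range(4)] == [42, 84, 56, 14]
# leftmost column: Catalan numbers C_{i-1}
assert [T(i, 0) for i in range(2, 7)] == [1, 2, 5, 14, 42]
```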
\begin{figure}
\begin{tabular}{|c|ccccc|}
\hline
$i$\textbackslash$j$ & 0 & 1 & 2 & 3 & 4 \\
\hline
1 & - & - & - & - & - \\
2 & 1 & - & - & - & - \\
3 & 2 & 1 & - & - & - \\
4 & 5 & 5 & 1 & - & - \\
5 & 14 & 21 & 9 & 1 & - \\
6 & 42 & 84 & 56 & 14 & 1 \\
\hline
\end{tabular}
\caption{Number triangle showing the number of $j$-dimensional faces $T(i,j)$ of the associahedron $K_i$. }
\label{numbertriangle}
\end{figure}
Having introduced the associahedra we now turn to how they control the $\epsilon$-expansion of the value of the coupling at the Wilson-Fisher fixed point. To each coefficient $g_{i-1}$ of the Wilson-Fisher fixed point coupling we associate the associahedron $K_i$. We will present the general result for any $i$ and then motivate it by looking at a number of examples. The calculation of the coefficient $g_{i-1}$ is controlled by the associahedron $K_i$ and is \cite{Loday,Loday2,Aguiar}
\begin{eqnarray}\label{ggeometric}
g_{i-1} = \sum_{F\ \text{face of}\ K_i} (-1)^{i+1-\dim F} \tilde{b}_F
\end{eqnarray}
where $\tilde{b}_F = \tilde{b}_{i_1-1} \cdots \tilde{b}_{i_m-1} $ for each face $F=K_{i_1}\times \cdots \times K_{i_m}$ with $i_1 +\ldots + i_m -m = i-1$ of the associahedron $K_i$. The sum is over all faces of the associahedron. In this way the facial structure of the associahedra controls the $\epsilon$-expansion of the coupling at the Wilson-Fisher fixed point. Consider now the first few examples so we can obtain a better feel for how it works.
\begin{itemize}
\item If $i=1$ the associahedron $K_1$ is the empty set and for this, by definition, we choose $g_0=1$.
\item If $i=2$ the associahedron $K_2$ has a single face shaped like $F=K_2$ and is a point of dimension $\dim F =0$. The associated coefficient is
\begin{eqnarray}
g_1 = (-1)^{i+1-\dim F} \tilde{b}_{2-1} =-\tilde{b}_1
\end{eqnarray}
\item If $i=3$ the associahedron $K_3$ has two faces shaped like $F_1 = K_2 \times K_2$ which are points and each of dimension $\dim F_1 =0$, and a single face shaped like $F_2 = K_3$ which is a line and is of dimension $\dim F_2 = 1$. The associated coefficient is
\begin{eqnarray}
g_2 = 2(-1)^{i+1-\dim F_1} \tilde{b}_{2-1}\tilde{b}_{2-1} + (-1)^{i+1-\dim F_2} \tilde{b}_{3-1} = 2 \tilde{b}_1^2 - \tilde{b}_2
\end{eqnarray}
\item If $i=4$ the associahedron $K_4$ has five faces shaped like $F_1 = K_2\times K_2 \times K_2$ which are points and each of dimension $\dim F_1 = 0$, five faces shaped like $F_2 = K_2 \times K_3$ which are lines and each of dimension $\dim F_2 = 1$ and one face shaped like $F_3 = K_4$ and of dimension $\dim F_3 = 2$. The associated coefficient is
\begin{eqnarray}
g_3 &=& 5 (-1)^{i+1-\dim F_1} \tilde{b}_{2-1} \tilde{b}_{2-1} \tilde{b}_{2-1} + 5 (-1)^{i+1- \dim F_2} \tilde{b}_{2-1} \tilde{b}_{3-1} + (-1)^{i+1-\dim F_3} \tilde{b}_{4-1} \nonumber \\
&=& -5 \tilde{b}_1^3 + 5 \tilde{b}_1 \tilde{b}_2 - \tilde{b}_3
\end{eqnarray}
\item If $i=5$ the associahedron $K_5$ has fourteen faces shaped like $F_1 = K_2 \times K_2 \times K_2 \times K_2$ which are points and of dimension $\dim F_1 = 0$, twenty one faces shaped like $F_2 = K_2\times K_2\times K_3$ which are lines and of dimension $\dim F_2 =1$, three faces shaped like $F_3 = K_3 \times K_3$ which are squares and of dimension $\dim F_3 = 2$, six faces shaped like $F_4 = K_2 \times K_4$ which are pentagons and of dimension $\dim F_4 = 2$ and one face shaped like $F_5 = K_5$ which is of dimension $\dim F_5 = 3$. The associated coefficient is found by the rule
\begin{eqnarray}
g_4 &=& 14 (-1)^{i+1-\dim F_1} \tilde{b}_{2-1}\tilde{b}_{2-1}\tilde{b}_{2-1}\tilde{b}_{2-1} + 21 (-1)^{i+1-\dim F_2} \tilde{b}_{2-1}\tilde{b}_{2-1}\tilde{b}_{3-1} \nonumber \\
&& + 3 (-1)^{i+1-\dim F_3} \tilde{b}_{3-1}\tilde{b}_{3-1} + 6 (-1)^{i+1-\dim F_4} \tilde{b}_{2-1}\tilde{b}_{4-1} + (-1)^{i+1-\dim F_5} \tilde{b}_{5-1} \nonumber \\
&=& 14 \tilde{b}_1^4 - 21 \tilde{b}_1^2 \tilde{b}_2 + 3 \tilde{b}_2^2 + 6 \tilde{b}_1 \tilde{b}_3 - \tilde{b}_4
\end{eqnarray}
\end{itemize}
In all cases there is complete agreement with our combinatorial Eqs. \ref{g0combinatoric}--\ref{g4combinatoric}. Note that for each $i$ the indices of the beta function coefficients in $g_{i-1}$ add up to $i-1$. This is a result of the condition $i_1 +\ldots + i_m -m = i-1$.
What we have arrived at is a simple and stunningly beautiful closed form expression for the coupling at the Wilson-Fisher fixed point to all orders in $\epsilon$. It is dictated by the geometry of the associahedra and at each order in $\epsilon$ is uniquely related to their facial structure
\begin{eqnarray}
g_* (\epsilon) = \sum_{i=1}^{\infty} g_{i-1} \left( \frac{\epsilon}{b_0}\right)^i \ , \qquad g_{i-1} = \sum_{F\ \text{face of}\ K_i} \frac{1}{b_0^m} (-1)^{i+1-\dim F} b_{i_1-1}\cdots b_{i_m-1}
\end{eqnarray}
for each of its faces $F=K_{i_1}\times \cdots \times K_{i_m}$.
\section{Anomalous Dimensions and Critical Exponents}\label{anodims}
Although the Wilson-Fisher fixed point coupling $g_*$ is scheme dependent, anomalous dimensions and critical exponents at the fixed point are not. They are scheme independent physical quantities characterizing the system.
We write the anomalous dimension of some operator in the theory as $\gamma(g)$. For instance it could be the anomalous dimension of the field $\phi$, the anomalous dimension of the mass or the anomalous dimension of some composite operator $(\phi^i\phi^i)^r$ for some integer $r$. In general the anomalous dimension is written in terms of the formal power series
\begin{eqnarray}
\gamma(g) &=& \sum_{i=1}^{\infty} \gamma_i g^i = \gamma_1 g + \gamma_2 g^2 + \gamma_3 g^3 + \ldots
\end{eqnarray}
At the Wilson-Fisher fixed point we find to all orders in $\epsilon$
\begin{eqnarray}
\gamma(\epsilon) &=& \sum_{j=1}^{\infty} \gamma_j \left( \sum_{k=1}^{\infty} g_{k-1} \left(\frac{\epsilon}{b_0} \right)^k \right)^j = \gamma_1 \frac{\epsilon}{b_0} + \left( g_1\gamma_1 +\gamma_2 \right) \left( \frac{\epsilon}{b_0}\right)^2 + \left( g_2 \gamma_1 +2g_1 \gamma_2 +\gamma_3 \right) \left(\frac{\epsilon}{b_0} \right)^3 + \ldots \nonumber \\
&=& \sum_{i=1}^{\infty} k_i \left( \frac{\epsilon}{b_0} \right)^i \ , \qquad k_i = \frac{1}{i!} \sum_{j=1}^{i} j! \gamma_j B_{i,j}\left(1! g_0, \ldots,(i-j+1)! g_{i-j} \right) \label{eq:gamma}
\end{eqnarray}
where we have found the $i$'th coefficient $k_i$ by the same method that led us to Eq. \ref{coefficient} and the $g_{i-1}$'s are given either by the combinatoric expression Eq. \ref{gcombinatoric} or by the geometric expression Eq. \ref{ggeometric}. This constitutes an exact closed form expression for the anomalous dimension of any operator at the Wilson-Fisher fixed point.
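As with the fixed point coupling itself, the coefficients $k_i$ can be cross-checked by composing the two power series directly and comparing with the Bell-polynomial formula. In the SymPy sketch below (our own illustration) all quantities are kept symbolic and the factor $1/b_0$ is absorbed into the expansion variable:

```python
# Compare the Bell-polynomial formula for k_i with a direct series
# composition gamma(g_*(eps)), keeping gamma_j and g_{i-1} symbolic.
from sympy import bell, symbols, factorial, expand, Rational, Symbol

eps = Symbol('epsilon')
gam = symbols('gamma1:5')     # gamma_1, ..., gamma_4
gs = symbols('g0:4')          # g_0, ..., g_3

gstar = sum(gs[i] * eps**(i + 1) for i in range(4))
direct = expand(sum(gam[j] * gstar**(j + 1) for j in range(4)))

def k(i):
    """k_i = (1/i!) sum_j j! gamma_j B_{i,j}(1! g_0, ..., (i-j+1)! g_{i-j})."""
    return Rational(1, factorial(i)) * sum(
        factorial(j) * gam[j - 1]
        * bell(i, j, tuple(factorial(m + 1) * gs[m] for m in range(i - j + 1)))
        for j in range(1, i + 1))

for i in range(1, 5):
    assert expand(direct.coeff(eps, i) - k(i)) == 0
```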
Note that at any order $i$ the coefficient $k_i$ only depends on the $i$ loop beta function coefficients (via $g_{i-1}$), and the coefficients of the $i$ loop anomalous dimension. It does not receive corrections from higher orders and is exact to this order. Again for the scalar $O(N)$ symmetric model using the six loop beta function, six loop field anomalous dimension $\gamma_{\phi}$ and six loop mass anomalous dimension $\gamma_{m^2}$ computed in \cite{Kompaniets:2016hct} we can compute the \emph{correction to scaling} $\omega(\epsilon) = \beta'(g_*(\epsilon),\epsilon)$ and the two \emph{critical exponents} $\eta(\epsilon) = 2 \gamma_{\phi}(g_*(\epsilon))$ and $\nu(\epsilon) = \left[ 2+ \gamma_{m^2}(g_*(\epsilon)) \right]^{-1}$ at the Wilson-Fisher fixed point to $O(\epsilon^6)$. We find complete agreement with the results reported in the Mathematica file accompanying \cite{Kompaniets:2016hct}. The same holds true for the additional critical exponents $\alpha$, $\beta$, $\gamma$ and $\delta$ related to $\eta$ and $\nu$ through the scaling relations $\gamma = \nu (2-\eta)$, $(4-\epsilon) \nu = 2-\alpha$, $\beta\delta = \beta+\gamma$ and $\alpha+2\beta+\gamma=2$. This is a final confirmation of our formal results.
\section{Scalar $O(1)$ Symmetric Model to $O(\epsilon^7)$}\label{O(1)}
We now use our general results to explicitly provide the value of the Wilson-Fisher fixed point coupling and critical exponents for the $O(1)$ model to $O(\epsilon^7)$. To our knowledge these results have not yet appeared in the literature. The beta function and anomalous dimensions have been calculated to seven loops in \cite{Schnetz:2016fhy}, so we can convert these computations into a prediction of the Wilson-Fisher fixed point coupling and all the critical exponents to $O(\epsilon^7)$. Our results in this section require only little effort to arrive at and have only been made possible by the extraordinary computations to six loops in \cite{Kompaniets:2016hct} and seven loops in \cite{Schnetz:2016fhy}. Using Eq. \ref{eq:g} and Eq. \ref{eq:gamma} they are
\begin{eqnarray}
g_*(\epsilon) &=& \frac{1}{3} \epsilon + \frac{17}{81} \epsilon^2 + \left( \frac{709}{17496} - \frac{4}{27} \zeta(3) \right) \epsilon^3 + \left( \frac{10909}{944784} - \frac{106}{729} \zeta(3) - \frac{2}{27} \zeta(4) + \frac{40}{81} \zeta(5) \right) \epsilon^4 \nonumber \\
&& + \left( - \frac{321451}{408146688} - \frac{11221}{104976}\zeta(3) + \frac{11}{81} \zeta(3)^2 - \frac{443}{5832}\zeta(4) + \frac{373}{729} \zeta(5) + \frac{25}{54}\zeta(6) - \frac{49}{27}\zeta(7) \right) \epsilon^5 \nonumber \\
&& + \left( \frac{32174329}{9183300480} - \frac{18707}{7085880} \zeta(3) + \frac{22429}{32805} \zeta(3)^2 + \frac{256}{729} \zeta(3)^3 + \frac{5776}{6075} \zeta(3,5) - \frac{19243}{314928} \zeta(4) \right. \nonumber \\
&& + \frac{38}{243} \zeta(3)\zeta(4) + \frac{448}{729} \zeta(3)\zeta(5) + \frac{63481}{590490} \zeta(5) + \frac{1117}{2187} \zeta(6) - \frac{7946}{3645} \zeta(7) - \frac{88181}{18225} \zeta(8) \nonumber \\
&&\left. + \frac{46112}{6561} \zeta(9) \right) \epsilon^6 + \left( \frac{1661059517}{991796451840} + \frac{45106}{286446699} \pi^{10} - \frac{7383787}{95659380} \zeta(3)- \frac{221281}{1180980}\zeta(3)^2 \right. \nonumber \\
&& + \frac{19696}{19683}\zeta(3)^3 + \frac{20425591}{8266860000}\pi^8 + \frac{58}{54675} \pi^8 \zeta(3) - \frac{161678}{164025} \zeta(3,5) - \frac{112}{27} \zeta(3) \zeta(3,5) \nonumber \\
&& - \frac{1599413}{1417176} \zeta(4) - \frac{1156}{6561} \zeta(3)\zeta(4) + \frac{16}{243} \zeta(4)^2 + \frac{129631}{33067440}\pi^6 + \frac{1010}{413343} \pi^6 \zeta(3) - \frac{6227}{229635} \pi^6 \zeta(5) \nonumber \\
&& + \frac{10590889}{85030560} \zeta(5) - \frac{163879}{19683} \zeta(3)\zeta(5) - \frac{1735}{243} \zeta(3)^2\zeta(5) - \frac{640}{729} \zeta(4)\zeta(5) + \frac{5030}{567} \zeta(5)^2 \nonumber \\
&& + \frac{12454}{1215} \zeta(5,3,3) - \frac{423301}{118098} \zeta(6) - \frac{400}{243} \zeta(3)\zeta(6) + \frac{569957}{393660} \zeta(7) + \frac{49}{1458} \zeta(3)\zeta(7) + \frac{316009}{25194240} \pi^4 \nonumber \\
&&+ \frac{4453}{393660}\pi^4 \zeta(3) + \frac{16}{2187} \pi^4 \zeta(3)^2 - \frac{613}{32805} \pi^4 \zeta(5) + \frac{6227}{18225} \pi^4 \zeta(7) - \frac{940}{5103} \zeta(7,3) \nonumber \\
&&\left. - \frac{11992616}{492075} \zeta(8) + \frac{1347170}{177147} \zeta(9) + \frac{6227}{81} \pi^2 \zeta(9) - \frac{8}{2187} P_{7,11} - \frac{651319}{810}\zeta(11) \right) \epsilon^7
\end{eqnarray}
\begin{eqnarray}
\omega(\epsilon) &=& \epsilon - \frac{17}{27} \epsilon^2 + \left( \frac{1603}{2916} + \frac{8}{9} \zeta(3) \right) \epsilon^3 + \left( - \frac{178417}{314928} - \frac{158}{243} \zeta(3) + \frac{2}{3} \zeta(4) - \frac{40}{9} \zeta(5) \right) \epsilon^4 \nonumber \\
&&+ \left( \frac{20734249}{34012224} + \frac{12349}{8748} \zeta(3) - \frac{4}{9} \zeta(3)^2 - \frac{79}{162} \zeta(4) + \frac{2324}{729} \zeta(5) - \frac{50}{9} \zeta(6) + \frac{196}{9} \zeta(7) \right) \epsilon^5 \nonumber \\
&&+ \left( -\frac{853678429}{1224440064} - \frac{6886777}{2834352} \zeta(3) - \frac{16904}{2187} \zeta(3)^2 - \frac{1280}{243} \zeta(3)^3 - \frac{5776}{405} \zeta(3,5) + \frac{12349}{11664} \zeta(4) \right. \nonumber \\
&& - \frac{2}{3} \zeta(3)\zeta(4) - \frac{95713}{39336} \zeta(5) - \frac{4960}{243} \zeta(3) \zeta(5) + \frac{5405}{1458} \zeta(6) - \frac{961}{81} \zeta(7) + \frac{88181}{1215} \zeta(8) \nonumber \\
&& \left. - \frac{230560}{2187} \zeta(9) \right) \epsilon^6 + \left( \frac{99202757785}{132239526912} - \frac{316009}{1399680} \pi^4 - \frac{129631}{1837080} \pi^6 - \frac{20425591}{459270000} \pi^8 \right. \nonumber \\
&& -\frac{90212}{31827411} \pi^{10} \zeta(11) + \frac{480656027}{102036672} \zeta(3) - \frac{4453}{21870} \pi^4 \zeta(3) - \frac{2020}{45927} \pi^6 \zeta(3) - \frac{116}{6075} \pi^8 \zeta(3) \nonumber \\
&& + \frac{1737593}{78732} \zeta(3)^2 - \frac{32}{243} \pi^4 \zeta(3)^2 - \frac{64312}{6561} \zeta(3)^2+ \frac{508228}{10935} \zeta(3,5) + \frac{224}{3} \zeta(3) \zeta(3,5) + \frac{34951705}{1889568} \zeta(4) \nonumber \\
&& + \frac{4907}{729} \zeta(3) \zeta(4) - \frac{16}{27} \zeta(4)^2 + \frac{198223}{314928} \zeta(5)+ \frac{1226}{3645} \pi^4 \zeta(5) + \frac{12454}{25515} \pi^6 \zeta(5) + \frac{385046}{2187} \zeta(3) \zeta(5) \nonumber \\
&& + \frac{3470}{27} \zeta(3)^2 \zeta(5) + \frac{640}{81} \zeta(4) \zeta(5) - \frac{226820}{1701} \zeta(5)^2 - \frac{24908}{135} \zeta(5,3,3) + \frac{10053541}{157464} \zeta(6) \nonumber \\
&& + \frac{1300}{81} \zeta(3) \zeta(6) - \frac{755965}{26244} \zeta(7) - \frac{12454}{2025} \pi^4 \zeta(7) + \frac{1421}{27} \zeta(3) \zeta(7) + \frac{1880}{567}\zeta(7,3) \nonumber \\
&&\left. + \frac{47970464}{164025} \zeta(8) + \frac{4459444}{59049} \zeta(9) - \frac{12454}{9} \pi^2 \zeta(9) +\frac{16}{243} P_{7,11} \right) \epsilon^7
\end{eqnarray}
\begin{eqnarray}
\eta(\epsilon) &=& \frac{1}{54} \epsilon^2 + \frac{109}{5832} \epsilon^3+\left( \frac{7217}{629856} -\frac{4}{243} \zeta(3) \right) \epsilon^4 + \left( \frac{321511}{68024448} - \frac{329}{17496} \zeta(3) - \frac{1}{84} \zeta(4) + \frac{40}{729} \zeta(5) \right) \epsilon^5 \nonumber \\
&& + \left( \frac{46425175}{264479053824} - \frac{4247}{25194240} \pi^4 - \frac{71}{1180980} \pi^6 - \frac{2063}{229635000} \pi^8 - \frac{1978411}{204073344} \zeta(3) \right. \nonumber \\
&&- \frac{1}{21870}\pi^4 \zeta(3) + \frac{10027}{157464} \zeta(3)^2 + \frac{256}{6561} \zeta(3)^3 + \frac{244}{2187} \zeta(3,5) + \frac{11969}{3779136} \zeta(4) +\frac{22}{729} \zeta(3) \zeta(4) \nonumber \\
&&\left. + \frac{59917}{1889568} \zeta(5) + \frac{50}{729} \zeta(3) \zeta(5) + \frac{42397}{314928} \zeta(6) - \frac{3707}{17496} \zeta(7) - \frac{88181}{164025} \zeta(8) + \frac{46112}{59049} \zeta(9) \right) \epsilon^7 \nonumber \\
\end{eqnarray}
\begin{eqnarray}
\nu(\epsilon) &=&\frac{1}{2}+ \frac{1}{12} \epsilon + \frac{7}{162} \epsilon^2 + \left( \frac{1783}{69984} - \frac{1}{27}\zeta(3) \right) \epsilon^3 + \left( \frac{92969}{7558272} -\frac{191}{5832} \zeta(3) - \frac{1}{36} \zeta(4) + \frac{10}{81} \zeta(5) \right) \epsilon^4 \nonumber \\
&&+ \left( \frac{4349263}{816293376} - \frac{6323}{209952} \zeta(3) + \frac{2}{81} \zeta(3)^2 - \frac{191}{7776} \zeta(4) + \frac{74}{729} \zeta(5) + \frac{25}{162} \zeta(6) - \frac{49}{108} \zeta(7) \right) \epsilon^5 \nonumber \\
&&+ \left( \frac{65712521}{29386561536} -\frac{58565}{2519424}\zeta(3) + \frac{2807}{26244} \zeta(3)^2 + \frac{64}{729} \zeta(3)^3 + \frac{61}{243} \zeta(3,5) - \frac{6323}{279936} \zeta(4) \right. \nonumber \\
&& + \frac{1}{27} \zeta(3) \zeta(4) + \frac{132893}{1889568} \zeta(5) + \frac{184}{729} \zeta(3) \zeta(5) + \frac{1615}{11664} \zeta(6) - \frac{4255}{11664} \zeta(7) - \frac{16337}{11664} \zeta(8) \nonumber \\
&& \left. + \frac{11528}{6561} \zeta(9) \right) \epsilon^6 + \left( \frac{3466530079}{3173748645888} +\frac{312061}{100776960} \pi^4 + \frac{463493}{396809280} \pi^6 + \frac{7085207}{11022480000} \pi^8 \right. \nonumber \\
&&+ \frac{22553}{477411165} \pi^{10} - \frac{651319}{3240} \zeta(11) -\frac{53182423}{2448880128} \zeta(3) + \frac{35}{11664} \pi^4 \zeta(3) + \frac{79}{91854} \pi^6 \zeta(3) \nonumber \\
&& + \frac{29}{109350} \pi^8 \zeta(3) + \frac{244339}{3779136} \zeta(3)^2 + \frac{8}{3645} \pi^4 \zeta(3)^2 + \frac{3991}{19683} \zeta(3)^3 - \frac{13633}{131220} \zeta(3,5) \nonumber \\
&& - \frac{28}{27} \zeta(3) \zeta(3,5) - \frac{47}{5103} \zeta(3,7) - \frac{248687}{839808} \zeta(4) - \frac{959}{8748} \zeta(3) \zeta(4) + \frac{2}{81} \zeta(4)^2 + \frac{3664579}{68024448} \zeta(5) \nonumber \\
&& - \frac{1393}{262440} \pi^4 \zeta(5) - \frac{6227}{918540} \pi^6 \zeta(5) - \frac{23827}{19683} \zeta(3) \zeta(5) - \frac{1735}{972} \zeta(3)^2 \zeta(5) - \frac{200}{729} \zeta(4) \zeta(5) \nonumber \\
&& + \frac{26008}{15309} \zeta(5)^2 + \frac{6227}{2430} \zeta(5,3,3) - \frac{3800527}{3779136} \zeta(6) - \frac{725}{1458} \zeta(3) \zeta(6) - \frac{1951}{157464} \zeta(7) + \frac{6227}{72900} \pi^4 \zeta(7) \nonumber \\
&& \left. - \frac{35}{972} \zeta(3) \zeta(7) - \frac{235}{5103} \zeta(7,3) - \frac{52036931}{7873200} \zeta(8) + \frac{2654189}{2834352} \zeta(9) + \frac{6227}{324} \pi^2 \zeta(9) - \frac{2}{2187} P_{7,11} \right) \epsilon^7
\end{eqnarray}
where
\begin{eqnarray}
\zeta(k) = \sum_{n=1}^{\infty} \frac{1}{n^k} \ , \qquad \zeta(k_i,\dots,k_1) = \sum_{n_i>\ldots>n_1 \geq 1} \frac{1}{n_i^{k_i} \cdots n_1^{k_1}}
\end{eqnarray}
and $P_{7,11}$ was calculated in \cite{Panzer:2015ida}. Numerically it is $P_{7,11} = 200.357566\ldots$. We also evaluate the fixed point coupling and critical exponents numerically. They are
\begin{eqnarray}
g_*(\epsilon) &=& 0.333333\epsilon +0.209877 \epsilon^2 - 0.137559 \epsilon^3 + 0.268653 \epsilon^4 - 0.843685 \epsilon^5 +3.15437 \epsilon^6 \nonumber \\
&& -13.4831 \epsilon^7 \\
\omega(\epsilon) &=& \epsilon - 0.62963 \epsilon^2 + 1.61822 \epsilon^3 - 5.23514 \epsilon^4 + 20.7498 \epsilon^5 - 93.1113 \epsilon^6 + 458.742 \epsilon^7 \\
\eta(\epsilon) &=& 0.0185185\epsilon^2 + 0.01869 \epsilon^3 - 0.00832877 \epsilon^4 + 0.0256565 \epsilon^5 - 0.0812726 \epsilon^6 + 0.314749 \epsilon^7 \\
\nu(\epsilon) &=& 0.5 + 0.083333 \epsilon + 0.0432099 \epsilon^2 - 0.0190434 \epsilon^3 + 0.0708838 \epsilon^4 - 0.217018 \epsilon^5 + 0.829313 \epsilon^6 \nonumber \\
&& - 3.57525 \epsilon^7
\end{eqnarray}
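The quoted numerical values follow from evaluating the exact $\zeta$-value expressions given earlier; as an illustration (our own check, shown for the first five orders of $g_*$):

```python
# Evaluate the exact zeta-value coefficients of g_*(epsilon) numerically and
# compare with the quoted decimal expansion.
from sympy import zeta, Rational as R, N

c1 = R(1, 3)
c2 = R(17, 81)
c3 = R(709, 17496) - R(4, 27)*zeta(3)
c4 = (R(10909, 944784) - R(106, 729)*zeta(3) - R(2, 27)*zeta(4)
      + R(40, 81)*zeta(5))
c5 = (R(-321451, 408146688) - R(11221, 104976)*zeta(3) + R(11, 81)*zeta(3)**2
      - R(443, 5832)*zeta(4) + R(373, 729)*zeta(5) + R(25, 54)*zeta(6)
      - R(49, 27)*zeta(7))

vals = [float(N(c)) for c in (c1, c2, c3, c4, c5)]
quoted = [0.333333, 0.209877, -0.137559, 0.268653, -0.843685]
assert all(abs(v - q) < 1e-5 for v, q in zip(vals, quoted))
```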
\section{Discussion}\label{conclusions}
In this work we have used Lagrange inversion to derive the exact form of the Wilson-Fisher fixed point coupling and critical exponents in the $\epsilon$-expansion in terms of the coefficients of the appropriate renormalization group functions. We have also argued that the $\epsilon$-expansion of the Wilson-Fisher fixed point coupling can be viewed in a geometric sense and is controlled by the associahedra.
Although we explicitly discussed the scalar $O(N)$ symmetric model in $d=4-\epsilon$ dimensions in the introduction, our analysis is certainly not restricted to this set of theories. In fact the above derivations apply to any theory with a single coupling for which $b_0 >0$, so that the Wilson-Fisher fixed point appears for $d<4$ ($\epsilon>0$). One could also imagine extending our analysis to multiple couplings. Similar studies could also be performed for theories with $b_0<0$, for which the fixed point appears for $d>4$ ($\epsilon<0$). In this way we envision many new projects worth investigating. We should mention that our analysis is independent of whether the various renormalization group functions have a finite radius of convergence or are asymptotic; this is irrelevant to the Lagrange inversion procedure as given here order by order.
One of our original hopes was that by deriving a closed form expression order by order for the Wilson-Fisher fixed point coupling we would gain further insight into the asymptotic nature of the expansion. Whether we view the expansion as combinatoric in terms of Bell polynomials or as geometric in terms of the facial structure of associahedra, this hope has unfortunately not been fulfilled. Nevertheless we believe that our results are useful, since one at least gains an understanding of how the various beta function coefficients enter at a given order.
Finally we used our general results to compute the Wilson-Fisher fixed point coupling and the associated critical exponents to $O(\epsilon^7)$ for the $O(1)$ model. We also gave the numerical values.
{\flushleft\textbf{Acknowledgments:}} We thank G. Dunne, E. M\o lgaard and R. Shrock for important discussions. TAR is partially supported by the Danish National Research Foundation under the grant DNRF:90.
\bibliographystyle{apsrev4-1}
TITLE: Verifying inverse Laplace transformation of an expression
QUESTION [0 upvotes]: I tried solving the next inverse Laplace transformation myself:
$$f(t)=L^{-1}\Bigl({{s}\over {s-C}}X(s)\Bigr)={dx(t)\over{dt}}*e^{Ct}$$
but I am not sure if it is correct. I cannot find a similar example in my textbooks. Can anyone please tell me if this is correct?
Note: the operator $*$ is the convolution operator.
Thank you for your time.
REPLY [1 votes]: It should be
$$f(t)=\mathcal{L}^{-1}\Bigl({{s}\over {s-C}}X(s)\Bigr)=\mathcal{L}^{-1}\Bigl({{s-C+C}\over {s-C}}X(s)\Bigr)=\mathcal{L}^{-1}\Bigl(X(s)+\frac{C}{s-C}X(s)\Bigr) \\ =x(t)+C x(t)*e^{Ct}$$
since $\mathcal{L}^{-1}(\frac{1}{s-C})=e^{Ct}$ and the convolution property.
The slight difference is because of the extra term in the Laplace transform of the derivative
$$\mathcal{L}\left({dx(t)\over{dt}}*e^{Ct}\right)=(sX(s) \color{red}{-x(0)})\frac{1}{s-C}=\frac{s}{s-C}X(s)\color{red}{-\frac{x(0)}{s-C}}$$
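One way to gain confidence in the identity is a numerical spot-check. The script below picks the concrete example $x(t)=e^{-t}$ (so $X(s)=1/(s+1)$) with $C=2$ — both are my own choices for illustration — for which partial fractions give the exact inverse $f(t)=\frac{2}{3}e^{2t}+\frac{1}{3}e^{-t}$, and compares it against $x(t)+C\,(x*e^{Ct})(t)$ computed by numerical quadrature:

```python
# Numerical spot-check of  L^{-1}( s/(s-C) X(s) ) = x(t) + C * (x * e^{Ct})(t)
# for the assumed example x(t) = e^{-t}, X(s) = 1/(s+1), C = 2.
from math import exp

def convolve(x, g, t, n=4000):
    """Trapezoidal approximation of (x * g)(t) = int_0^t x(tau) g(t-tau) dtau."""
    h = t / n
    s = 0.5 * (x(0.0) * g(t) + x(t) * g(0.0))
    for k in range(1, n):
        tau = k * h
        s += x(tau) * g(t - tau)
    return s * h

C = 2.0
x = lambda t: exp(-t)
for t in (0.5, 1.0, 1.5):
    rhs = x(t) + C * convolve(x, lambda u: exp(C * u), t)
    exact = (2.0/3.0) * exp(C * t) + (1.0/3.0) * exp(-t)  # partial fractions
    assert abs(rhs - exact) < 1e-4
```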
\begin{document}
\maketitle
\begin{abstract}
Numerical schemes {\em provably} preserving
the positivity of density and pressure
are highly desirable
for ideal
magnetohydrodynamics (MHD), but the rigorous positivity-preserving (PP) analysis remains
challenging.
The difficulties mainly arise from the intrinsic complexity of the MHD equations as well as the indeterminate relation between the PP property and the divergence-free condition on the magnetic field. This paper presents the first
rigorous PP analysis of conservative schemes with the Lax-Friedrichs (LF) flux for one- and multi-dimensional ideal MHD.
The significant innovation is
the discovery of the theoretical connection between the PP property and a discrete divergence-free (DDF) condition. This connection is established through the generalized LF splitting properties, which are alternatives of the usually-expected LF splitting property that does not hold for ideal MHD. The generalized LF splitting properties involve a number of admissible states strongly coupled by the DDF condition, making their derivation very difficult. We derive these properties via
a novel equivalent form of the admissible state set and an important inequality, which is skillfully constructed by technical estimates. Rigorous PP analysis is then presented for finite volume and discontinuous Galerkin schemes with the LF flux on uniform Cartesian meshes. In the 1D case, the PP property is proved for the first-order scheme with proper
numerical viscosity, and also for arbitrarily high-order schemes under conditions accessible by a PP limiter. In the 2D case, we show that the DDF condition is necessary and crucial for achieving the PP property. It is observed that even slightly violating the proposed DDF condition may
cause failure to preserve
the positivity of pressure.
We prove that the 2D LF type scheme with proper numerical viscosity preserves both the positivity and the DDF condition.
Sufficient conditions are derived for 2D PP high-order schemes, and extension to 3D is discussed. Numerical examples further confirm the theoretical findings.
\end{abstract}
\begin{keywords}
compressible magnetohydrodynamics, positivity-preserving, admissible states, discrete divergence-free condition, generalized Lax-Friedrichs splitting, hyperbolic conservation laws
\end{keywords}
\begin{AMS}
65M60, 65M08, 65M12,
35L65, 76W05
\end{AMS}
\section{Introduction}
\label{sec:intro}
Magnetohydrodynamics (MHD) plays an important role in many fields, including astrophysics, space physics and
plasma physics.
The $d$-dimensional ideal compressible MHD equations can be
written as
\begin{equation}\label{eq:MHD}
\frac{{\partial {\bf U}}}{{\partial t}} + \sum\limits_{i = 1}^d {\frac{{\partial { {\bf F}_i}({\bf U})}}{{\partial x_i}}} = {\bf 0},
\end{equation}
together with the divergence-free condition on the magnetic field
${\bf B}=(B_1,B_2,B_3)$,
\begin{equation}\label{eq:2D:BxBy0}
\sum\limits_{i = 1}^d \frac{\partial B_i } {\partial x_i} =0,
\end{equation}
where $d=1, 2$ or $3$. In \eqref{eq:MHD}, the conservative vector ${\bf U} = ( \rho,\rho {\bf v},{\bf B},E )^{\top}$,
and ${\bf F}_i({\bf U})$ denotes the flux in the $x_i$-direction, $i=1,\cdots,d$, defined by
\begin{align*}
{\bf F}_i({\bf U}) = \Big( \rho v_i,~\rho v_i {\bf v} - B_i {\bf B} + p_{\rm tot} {\bf e}_i,~v_i {\bf B} - B_i {\bf v},~v_i(E+p_{\rm tot} ) - B_i({\bf v}\cdot {\bf B})
\Big)^{\top}.
\end{align*}
Here $\rho$ is the density, the vector ${\bf v}=(v_1,v_2,v_3)$ denotes
the fluid velocity,
$p_{\rm tot}$ is the total pressure consisting of
the gas pressure $p$ and magnetic pressure $p_m= \frac{|{\bf B}|^2}2$,
the vector ${\bf e}_i$ represents the $i$-th row of the unit matrix of size 3, and
$E=\rho e + \frac12 \left( \rho |{\bf v}|^2 + |{\bf B}|^2 \right)$ is the total energy
consisting of thermal, kinetic and magnetic energies with $e$ denoting the specific internal energy.
The equation of state (EOS) is needed to
close the system \eqref{eq:MHD}--\eqref{eq:2D:BxBy0}.
For ideal gases it is given by
\begin{equation}\label{eq:EOS}
p = (\gamma-1) \rho e,
\end{equation}
where the adiabatic index $\gamma>1$. Although \eqref{eq:EOS} is widely used,
there are situations where it is more appropriate to use other EOSs. A general EOS can be
expressed as
\begin{equation}\label{eq:gEOS}
p = p(\rho,e),
\end{equation}
which is assumed to satisfy the following condition
\begin{equation}\label{eq:assumpEOS}
\mbox{if}\quad \rho > 0,\quad \mbox{then}\quad e>0~\Leftrightarrow~p(\rho,e) > 0.
\end{equation}
Such a condition is reasonable: it holds for the ideal EOS \eqref{eq:EOS} and was also
used
in \cite{Zhang2011}.
Since \eqref{eq:MHD} is strongly nonlinear, its analytic treatment is very difficult, and numerical simulation is
a primary
approach to
exploring the physical mechanisms in MHD.
In the past few decades, the numerical study of MHD has attracted
much attention, and
various
numerical schemes
have been developed for \eqref{eq:MHD}.
Besides the standard difficulty in solving nonlinear hyperbolic conservation laws, an additional numerical challenge for the MHD system comes from the divergence-free condition \eqref{eq:2D:BxBy0}. Although \eqref{eq:2D:BxBy0}
holds for the exact solution as long as it does initially, it cannot be easily preserved by a numerical scheme (for $d \ge 2$). Numerical evidence and some analysis in the literature indicate that negligence in dealing with the condition \eqref{eq:2D:BxBy0} can lead to numerical instabilities or nonphysical features in the computed solutions, see e.g., \cite{Brackbill1980,Evans1988,BalsaraSpicer1999,Toth2000,Dedner2002,Li2005}.
Up to now, many numerical techniques have been proposed to control the divergence error of the numerical magnetic field. They include, but are not limited to:
the
eight-wave methods \cite{Powell1994,Chandrashekar2016},
the projection method \cite{Brackbill1980},
the hyperbolic divergence cleaning methods \cite{Dedner2002},
the locally divergence-free methods \cite{Li2005,Yakovlev2013},
the constrained transport method \cite{Evans1988} and its many variants
\cite{Ryu1998,BalsaraSpicer1999,Londrillo2000,Balsara2004,Londrillo2004,Torrilhon2004,Torrilhon2005,Rossmanith2006,Artebrant2008,Li2008,Balsara2009,Li2011,Li2012,Christlieb2014}.
The reader is also referred to an early survey in \cite{Toth2000}.
Another numerical challenge in the simulation of MHD is preserving the
positivity of density $\rho$ and pressure $p$.
In physics, these two quantities are non-negative. Numerically, their
positivity is critical but not always satisfied by numerical solutions.
In fact, once a negative density or pressure is produced in a simulation,
the discrete problem becomes ill-posed due to the loss of hyperbolicity, causing the simulation code to break down.
However, most existing MHD schemes are not provably positivity-preserving (PP), and thus run a large risk of failure when simulating MHD problems with strong discontinuities, low density, low pressure or low plasma-beta.
Several efforts have been made to reduce such risk. Balsara and Spicer \cite{BalsaraSpicer1999a} proposed a strategy to maintain positive pressure by switching the Riemann solvers for different wave situations. Janhunen \cite{Janhunen2000} designed a new 1D Riemann solver for the modified MHD system, and claimed its PP property by numerical experiments. Waagan \cite{Waagan2009} designed a positive linear reconstruction for second-order MUSCL-Hancock scheme, and conducted some 1D analysis based on the presumed PP property of the first-order scheme.
From a relaxation system, Bouchut et al. \cite{Bouchut2007,Bouchut2010} derived a multiwave approximate Riemann solver
for 1D ideal MHD,
and deduced sufficient conditions for the solver to satisfy discrete entropy inequalities and the PP property.
Recent years have witnessed some significant advances in developing bound-preserving high-order schemes for hyperbolic systems (e.g., \cite{zhang2010,zhang2010b,zhang2011b,Hu2013,Xu2014,Liang2014,WuTang2015,moe2017positivity,WuTang2017ApJS,Xu2017}).
High-order limiting techniques were well
developed in \cite{Balsara2012,cheng} for the finite volume or DG methods of MHD, to enforce the admissibility\footnote{Throughout this paper, the admissibility of a solution or state $\bf U$ means
that the density and pressure corresponding to the conservative vector $\bf U$ are both positive, see
Definition \ref{def:G}.} of
the reconstructed or DG polynomial solutions at certain nodal points. These techniques are based on a presumed proposition that the cell-averaged solutions computed by those schemes are always admissible.
Such a proposition has not yet been rigorously proved for those methods,
although it could be deduced for the 1D schemes in \cite{cheng}
under some
assumptions (see Remark \ref{rem:chengassum}).
With the presumed PP property of the Lax-Friedrichs (LF) scheme, Christlieb et al. \cite{Christlieb,Christlieb2016} developed PP
high-order finite difference methods for \eqref{eq:MHD} by extending the parametrized flux limiters \cite{Xu2014,Xiong2016}.
It was demonstrated numerically that
the above PP treatments could enhance the robustness of MHD codes.
However,
as mentioned in \cite{Christlieb},
{\em there was no rigorous proof
to genuinely and
completely show the PP property of those or any other schemes for \eqref{eq:MHD} in the multi-dimensional cases.
Even for the simplest first-order schemes, such as
the LF scheme, the PP property is still unclear in theory. Moreover, it
is also
unanswered theoretically
whether the divergence-free condition \eqref{eq:2D:BxBy0} is
connected with the PP property of
schemes for \eqref{eq:MHD}. Therefore, it is significant to explore provably PP schemes for \eqref{eq:MHD} and
develop related theories for rigorous PP analysis.}
The aim of this paper is to carry out a
rigorous PP analysis
of conservative finite volume and DG schemes with the LF flux
for one- and multi-dimensional ideal MHD system \eqref{eq:MHD}.
Such analysis is extremely nontrivial and technical.
The challenges mainly come from
the intrinsic complexity of the system \eqref{eq:MHD}--\eqref{eq:2D:BxBy0},
as well as the unclear relation between the PP property and the divergence-free condition on the magnetic field.
Fortunately, we find a novel starting point for the analysis, based on an equivalent form of
the admissible state set.
This form helps us to
successfully derive the generalized LF splitting properties, which couple
a discrete divergence-free (DDF) condition for the magnetic
field with
the convex combination of some LF splitting terms.
These properties imply
a theoretical connection between the PP property and the proposed DDF condition.
As the generalized LF splitting properties involve a number of strongly coupled states,
their discovery and proofs are extremely technical.
With the aid of these properties, we present the
rigorous PP analysis for finite volume and DG schemes on uniform Cartesian meshes.
Meanwhile, our analysis also reveals
that the DDF condition is necessary and crucial for achieving the PP property.
This finding is consistent with
existing numerical evidence
that violating the divergence-free condition
may more easily cause negative pressure
(see e.g., \cite{Brackbill1980,Balsara2004,Rossmanith2006,Balsara2012}),
as well as our previous work on the relativistic MHD \cite{WuTangM3AS}.
Without the relativistic effect, the system \eqref{eq:MHD} admits unbounded velocities and poses difficulties essentially
different from those of the relativistic case.
It is also
worth mentioning that, as will be shown,
the 1D LF scheme is not always PP
for piecewise
constant $B_1$, making some existing techniques \cite{zhang2010} for PP analysis inapplicable in the multi-dimensional ideal MHD case.
Contrary to the usual expectation, we also find that the 1D LF scheme with a standard numerical viscosity parameter is not always PP, no matter how small the CFL number is.
A proper viscosity parameter must therefore be estimated, which introduces additional difficulties into the analysis.
Note that,
for the incompressible
flow system in the vorticity-stream function formulation,
there is also a
divergence-free condition, but on the
fluid velocity, i.e., the incompressibility condition;
it is crucial
in designing
schemes that satisfy the
maximum principle of vorticity,
see e.g. \cite{zhang2010}.
An important difference in our MHD case is that
the divergence-free quantity
(the magnetic field)
also enters nonlinearly into the definition of
the positive quantity of interest, the internal energy or pressure; see \eqref{eq:DefG}.
The paper is organized as follows. Section \ref{sec:eqDef} gives several important properties of the admissible states for the PP analysis.
Sections \ref{sec:1Dpcp} and \ref{sec:2Dpcp} respectively study
1D and 2D PP schemes.
Numerical verifications are provided in Section \ref{sec:examples},
and the 3D extension
is given in Appendix \ref{sec:3D}.
Section \ref{sec:con} concludes the paper with several remarks.
\section{Admissible states}\label{sec:eqDef}
Under the condition \eqref{eq:assumpEOS}, it is natural to
define the set of
admissible states $\bf U$ of the ideal MHD
as follows.
\begin{definition}\label{def:G}
The set of admissible states of the ideal MHD
is defined by
\begin{equation}\label{eq:DefG}
{\mathcal G} = \left\{ {\bf U} = (\rho,{\bf m},{\bf B},E)^\top ~\Big|~ \rho > 0,~
{\mathcal E}( {\bf U} ) := E- \frac12 \left( \frac{|{\bf m}|^2}{\rho} + |{\bf B}|^2 \right) > 0 \right\},
\end{equation}
where ${\mathcal E} ({\bf U}) = \rho e$ denotes the internal energy.
\end{definition}
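For concreteness, the membership test in \eqref{eq:DefG} translates directly into code. The following NumPy sketch is ours, not part of the paper: the length-8 state layout $(\rho,{\bf m},{\bf B},E)$, the ideal-gas assembly helper, and all function names are illustrative assumptions.

```python
import numpy as np

def internal_energy(U):
    """E(U) = E - (|m|^2/rho + |B|^2)/2 for U = (rho, m, B, E) in R^8."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    return E - 0.5 * (m @ m / rho + B @ B)

def is_admissible(U):
    """Membership test for the admissible state set G."""
    return U[0] > 0 and internal_energy(U) > 0

def conservative(rho, v, B, p, gamma=5.0 / 3.0):
    """Assemble U from primitive variables with the ideal-gas EOS,
    so that E = p/(gamma - 1) + (rho |v|^2 + |B|^2)/2."""
    v, B = np.asarray(v, float), np.asarray(B, float)
    E = p / (gamma - 1.0) + 0.5 * (rho * (v @ v) + B @ B)
    return np.concatenate(([rho], rho * v, B, [E]))
```

For an admissible primitive state the recovered internal energy is exactly $p/(\gamma-1)$, while a negative input pressure yields an inadmissible $\bf U$.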
Given that the initial data are admissible, a
scheme is defined to be PP if the numerical solutions always stay in the set $\mathcal G$.
One can see from \eqref{eq:DefG} that it is difficult to
numerically preserve the positivity of ${\mathcal E}$,
whose computation nonlinearly involves
all the conservative variables.
In most numerical methods, the conservative
quantities are themselves evolved according to their own conservation laws,
which do not by themselves guarantee the positivity of the
computed ${\mathcal E}$. In theory, it is indeed a challenge to judge a priori
whether a scheme is always PP under all circumstances.
\subsection{Basic properties}
To overcome the difficulties arising from the nonlinearity of
the function ${\mathcal E} ({\bf U})$,
we propose the following equivalent definition of ${\mathcal G}$.
\begin{lemma}[Equivalent definition]\label{theo:eqDefG}
The admissible state set ${\mathcal G}$ is equivalent to
\begin{equation}\label{eq:newDefG}
{\mathcal G}_* = \left\{ {\bf U} = (\rho,{\bf m},{\bf B},E)^\top ~\Big|~ \rho > 0,~~~
{\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} > 0,~\forall~{\bf v}^*, {\bf B}^* \in {\mathbb {R}}^3 \right\},
\end{equation}
where
$${\bf n}^* = \bigg( \frac{|{\bf v}^*|^2}2,~- {\bf v}^*,~-{\bf B}^*,~1 \bigg)^\top.$$
\end{lemma}
\begin{proof}
If ${\bf U} \in {\mathcal G}$, then $\rho>0$, and for any ${\bf v}^*, {\bf B}^* \in {\mathbb {R}}^3$,
\begin{align*}
{\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
= \frac{\rho}{2} \big| \rho^{-1} {\bf m} - {\bf v}^* \big|^2 + \frac{| {\bf B} - {\bf B}^* |^2}2 + {\mathcal E}( {\bf U} ) \ge {\mathcal E}( {\bf U} ) > 0,
\end{align*}
that is ${\bf U} \in {\mathcal G}_*$. Hence $ {\mathcal G} \subset {\mathcal G}_* $. On the other hand, if
${\bf U} \in {\mathcal G}_*$, then $\rho>0$, and taking
${\bf v}^* = \rho^{-1} {\bf m}$ and ${\bf B}^*={\bf B}$ gives
$
0< {\bf U} \cdot {\bf n}^* + {|{\bf B}^*|^2}/{2} = {\mathcal E}( {\bf U} ).
$
This means ${\bf U} \in {\mathcal G}$. Therefore $ {\mathcal G}_* \subset {\mathcal G}$. In conclusion, $ {\mathcal G} = {\mathcal G}_*$.
\end{proof}
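The algebraic identity underlying this proof, ${\bf U}\cdot{\bf n}^* + |{\bf B}^*|^2/2 = \frac{\rho}{2}|\rho^{-1}{\bf m}-{\bf v}^*|^2 + \frac{|{\bf B}-{\bf B}^*|^2}{2} + {\mathcal E}({\bf U})$, can be spot-checked numerically. The sketch below is ours (same assumed length-8 state layout as above) and evaluates both sides for random free vectors:

```python
import numpy as np

def lhs(U, vs, Bs):
    """U . n* + |B*|^2/2, with n* = (|v*|^2/2, -v*, -B*, 1)."""
    ns = np.concatenate(([0.5 * (vs @ vs)], -vs, -Bs, [1.0]))
    return U @ ns + 0.5 * (Bs @ Bs)

def rhs(U, vs, Bs):
    """(rho/2)|m/rho - v*|^2 + |B - B*|^2/2 + internal energy E(U)."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    eint = E - 0.5 * (m @ m / rho + B @ B)
    dv, dB = m / rho - vs, B - Bs
    return 0.5 * rho * (dv @ dv) + 0.5 * (dB @ dB) + eint

rng = np.random.default_rng(0)
U = np.concatenate(([1.3], rng.normal(size=3), rng.normal(size=3), [9.0]))
vs, Bs = rng.normal(size=3), rng.normal(size=3)
```

The two sides agree to machine precision for any $\rho\neq 0$, since the identity is purely algebraic.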
{\em The two constraints in \eqref{eq:newDefG}
are both linear with respect to $\bf U$, making it more effective to analytically
verify the PP property of numerical schemes for ideal MHD.}
The convexity of the admissible state set is very useful in bound-preserving analysis, because
it can help reduce the complexity of the analysis
when the schemes can be rewritten as certain
convex combinations, see e.g., \cite{zhang2010b,Zhang2011,Wu2017}. For the ideal MHD, the convexity of
$\mathcal G_*$ or $\mathcal G$ can be easily shown by definition.
\begin{lemma}[Convexity]\label{theo:MHD:convex}
The set ${\mathcal G}_*$ is convex.
Moreover, $\lambda {\bf U}_1 + (1-\lambda) {\bf U}_0 \in {\mathcal G}_*$
for any ${\bf U}_1 \in {\mathcal G}_*, {\bf U}_0 \in \overline{\mathcal G}_*$ and $\lambda \in (0,1]$, where $\overline{\mathcal G}_*$ is the closure of ${\mathcal G}_*$.
\end{lemma}
\begin{proof}
The first component of $ {\bf U}_\lambda := \lambda {\bf U}_1 + (1-\lambda) {\bf U}_0 $ equals
$\lambda \rho_1 + (1-\lambda) \rho_0 >0$.
For any ${\bf v}^*, {\bf B}^* \in {\mathbb {R}}^3$,
${\bf U}_\lambda \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
= \lambda \big( {\bf U}_1 \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \big) + (1-\lambda) \big( {\bf U}_0 \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \big) > 0.$
This shows ${\bf U}_\lambda \in {\mathcal G}_*$.
The proof is completed by the definition of convexity.
\end{proof}
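As a quick numerical illustration of this convexity (again a sketch of ours, not the paper's), one can sample random admissible states and confirm that their convex combinations remain admissible:

```python
import numpy as np

def is_admissible(U):
    """rho > 0 and positive internal energy, as in Definition of G."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    return rho > 0 and E - 0.5 * (m @ m / rho + B @ B) > 0

rng = np.random.default_rng(0)

def random_admissible():
    """Sample U with rho > 0 and positive internal energy by construction."""
    rho, eint = rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)
    v, B = rng.normal(size=3), rng.normal(size=3)
    E = eint + 0.5 * (rho * (v @ v) + B @ B)
    return np.concatenate(([rho], rho * v, B, [E]))

# every sampled convex combination of admissible states stays admissible
checks = [is_admissible(lam * random_admissible() + (1.0 - lam) * random_admissible())
          for lam in np.linspace(0.0, 1.0, 11) for _ in range(20)]
```

This is only a spot check, of course; the lemma itself gives the general statement.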
We also have the following orthogonal invariance, which can be verified directly.
\begin{lemma}[Orthogonal invariance] \label{lem:MHD:zhengjiao}
Let ${\bf T} :={\rm diag}\{1,{\bf T}_3,{\bf T}_3,1\}$, where
${\bf T}_3$ is any orthogonal matrix of size 3.
If ${\bf U} \in{\mathcal G}$, then
${\bf T} {\bf U} \in{\mathcal G}$.
\end{lemma}
We refer to the following property \eqref{eq:LFproperty} as the {\em LF splitting property},
\begin{equation}\label{eq:LFproperty}
{\bf U} \pm \frac{ {\bf F}_i ({\bf U}) }{\alpha} \in {\mathcal G}, \quad \forall~{\bf U} \in {\mathcal G},~\forall~\alpha \ge \chi {\mathscr{R}}_i ({\bf U}) ,
\end{equation}
where $\chi\ge1 $ is some constant, and
${\mathscr{R}}_i ({\bf U})$ is the spectral radius of the Jacobian matrix in the $x_i$-direction, $i=1,2,3$.
For the ideal MHD system with the EOS \eqref{eq:gEOS}, one has \cite{Powell1994}
$$
{\mathscr{R}}_i ({\bf U}) = |v_i| + \mathcal{C}_i,
$$
with
$$
{\mathcal C}_i := \frac{1}{\sqrt{2}} \left[ \mathcal{C}_s^2 + \frac{ |{\bf B}|^2}{\rho} + \sqrt{ \left( \mathcal{C}_s^2 + \frac{ |{\bf B}|^2}{\rho} \right)^2 - 4 \frac{ \mathcal{C}_s^2 B_i^2}{\rho} } \right]^\frac12,
$$
where $\mathcal{C}_s=\sqrt{\gamma p/\rho}$ is the sound speed.
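The spectral radius $\mathscr{R}_i({\bf U})$ above can be computed as in the following NumPy sketch (our own helper names; the assumed state layout is $(\rho,{\bf m},{\bf B},E)$, the ideal-gas EOS is used, and the Python index $i=0,1,2$ stands for the direction $x_{i+1}$):

```python
import numpy as np

def fast_speed(U, i, gamma=5.0 / 3.0):
    """Fast magnetosonic speed C_i in the x_{i+1}-direction (i = 0, 1, 2)."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    p = (gamma - 1.0) * (E - 0.5 * (m @ m / rho + B @ B))  # ideal-gas EOS
    cs2 = gamma * p / rho                                  # sound speed squared
    b2 = B @ B / rho
    disc = np.sqrt((cs2 + b2) ** 2 - 4.0 * cs2 * B[i] ** 2 / rho)
    return np.sqrt(0.5 * (cs2 + b2 + disc))

def spectral_radius(U, i, gamma=5.0 / 3.0):
    """R_i(U) = |v_i| + C_i, spectral radius of the x_{i+1} Jacobian."""
    return abs(U[1 + i] / U[0]) + fast_speed(U, i, gamma)
```

With ${\bf B}={\bf 0}$ this reduces, as expected, to the hydrodynamic value $|v_i| + \sqrt{\gamma p/\rho}$.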
If true, the LF splitting property would be very useful in
analyzing the PP property of the schemes with the LF flux, see its roles in \cite{zhang2010b,WuTang2015,Wu2017}
for the equations of hydrodynamics.
Unfortunately, for the ideal MHD,
\eqref{eq:LFproperty} is untrue in general,
as evidenced numerically in \cite{cheng} for ideal gases.
In fact, one can disprove \eqref{eq:LFproperty}; see the proof of the following proposition in Appendix \ref{sec:proof}.
\begin{proposition}\label{lemma:2.8}
The LF splitting property \eqref{eq:LFproperty} does not hold in general.
\end{proposition}
\subsection{Generalized LF splitting properties} \label{sec:GLFs}
Since \eqref{eq:LFproperty} does not hold, we would like to seek some alternative properties which are weaker than \eqref{eq:LFproperty}.
By considering the convex combination of some LF splitting terms,
we discover the {\em generalized LF splitting properties} under
some ``discrete divergence-free'' condition for the magnetic field.
{\em As one of the most highlighted points of this paper, the discovery and proofs of such properties are very nontrivial and extremely technical.}
\subsubsection{A constructive inequality}
We first construct an inequality, which will play a pivotal role
in establishing the generalized LF splitting properties.
\begin{lemma}\label{theo:MHD:LLFsplit}
If ${\bf U}, \tilde{\bf U} \in {\mathcal G}$, then the inequality
\begin{equation}\label{eq:MHD:LLFsplit}
\bigg( {\bf U} - \frac{ {\bf F}_i({\bf U})}{\alpha}
+
\tilde{\bf U} + \frac{ {\bf F}_i(\tilde{\bf U})}{\alpha}
\bigg) \cdot {\bf n}^* + |{\bf B}^*|^2
+ \frac{ B_i - \tilde B_i }{\alpha} ({\bf v}^* \cdot {\bf B}^*) > 0 ,
\end{equation}
holds for any ${\bf v}^*, {\bf B}^* \in {\mathbb{R}}^3$ and $|\alpha|>\alpha_{i} ({\bf U},\tilde{\bf U}) $, where $i\in\{1,2,3\}$, and
\begin{gather}\label{eq:alpha_i}
\alpha_{i} ({\bf U},\tilde{\bf U}) =
\min_{\sigma \in \mathbb{R}} \alpha_{i} ( {\bf U}, \tilde {\bf U};\sigma ),
\\ \nonumber
\alpha_{i} ( {\bf U}, \tilde {\bf U};\sigma )
=
\max\big\{ |v_i|+ {\mathscr{C}}_i,|\tilde v_i| + \tilde {\mathscr{C}}_i ,| \sigma v_i + (1-\sigma) \tilde v_i | + \max\{ {\mathscr{C}}_i , \tilde {\mathscr{C}}_i \} \big\}
+ f ( {\bf U}, \tilde {\bf U}; \sigma ),
\end{gather}
with
\begin{align*}
& f( {\bf U}, \tilde {\bf U}; \sigma) = \frac{ |\tilde{\bf B}-{\bf B}| }{\sqrt{2}} \sqrt{ \frac{\sigma^2}{\rho} + \frac{ (1-\sigma)^2 }{\tilde \rho} },
\\
&
{\mathscr{C}}_i = \frac{1}{\sqrt{2}} \left[ {\mathscr{C}}_s^2 + \frac{ |{\bf B}|^2}{\rho} + \sqrt{ \left( {\mathscr{C}}_s^2 + \frac{ |{\bf B}|^2}{\rho} \right)^2 - 4 \frac{ {\mathscr{C}}_s^2 B_i^2}{\rho} } \right]^\frac12,
\end{align*}
and ${\mathscr{C}}_s=\frac{p}{\rho \sqrt{2e}}$.
\end{lemma}
\begin{proof}
{\tt (i)}. We first prove \eqref{eq:MHD:LLFsplit} for $i=1$.
Let us define
\begin{align*}
& \Pi_u = \big( {\bf U}
+
\tilde{\bf U}
\big) \cdot {\bf n}^* + |{\bf B}^*|^2, \quad
\Pi_f= \big( {\bf F}_1({\bf U})
- {\bf F}_1(\tilde{\bf U})
\big) \cdot {\bf n}^* - \big( B_1 - \tilde B_1 \big) ({\bf v}^* \cdot {\bf B}^*) .
\end{align*}
Then it suffices to show
\begin{equation}\label{eq:needproof1}
\frac{|\Pi_f|}{\Pi_u} \le \alpha_1 ({\bf U},\tilde{\bf U}),
\end{equation}
by noting that
\begin{equation}\label{eq:Piu}
\Pi_u = | {\bm \theta}|^2 > 0 ,
\end{equation}
where the nonzero vector ${\bm \theta} \in {\mathbb{R}}^{14}$ is defined as
$$
{\bm \theta}= \frac1{\sqrt{2}} \Big(
\sqrt{\rho} ( {\bf v} - {\bf v}^* ),~
{\bf B}-{\bf B}^*,~
\sqrt{2 \rho e},~
\sqrt{\tilde \rho} ( \tilde {\bf v} - {\bf v}^* ),~
\tilde{\bf B}-{\bf B}^*,~
\sqrt{2 \tilde \rho \tilde e}
\Big)^\top.
$$
The proof of \eqref{eq:needproof1} is divided into the following two steps.
{\tt Step 1}. Reformulate $\Pi_f$ into a quadratic form in the variables $\theta_j, 1\le j \le 14$.
We require that the coefficients of the quadratic form do not depend on ${\bf v}^*$ and ${\bf B}^*$.
This is very nontrivial and becomes the key step of the proof.
We first arrange $\Pi_f$ by a technical decomposition
\begin{equation}\label{eq:3parts}
\Pi_f = \Pi_1 + \Pi_2 + \Pi_3 + (\Pi_4-\tilde \Pi_4) ,
\end{equation}
where
\begin{align*}
& \Pi_j = \frac12 v_1^* \big( B_j^2-\tilde B_j^2 \big) - v_1^* B_j^* ( B_j-\tilde B_j),\quad j=1, 2, 3, \\
& \Pi_4 = \frac{\rho v_1 }2 |{\bf v}-{\bf v}^*|^2 + v_1 \rho e + p ( v_1 - v_1^* )
+ \sum_{j=2}^3 ( B_j (v_1-v_1^*)
- B_1( v_j -v_j^* ) ) ( B_j - B_j^* ),\\
& \tilde \Pi_4 = \frac{\tilde \rho \tilde v_1 }2 |\tilde{\bf v}-{\bf v}^*|^2 + \tilde v_1 \tilde \rho \tilde e + \tilde p ( \tilde v_1 - v_1^* )
+ \sum_{j=2}^3 ( \tilde B_j ( \tilde v_1-v_1^*)
- \tilde B_1( \tilde v_j -v_j^* ) ) ( \tilde B_j - B_j^* ).
\end{align*}
One can immediately rewrite $\Pi_4$ and $\tilde \Pi_4$ as
\begin{align*}
&\Pi_4 = v_1 \Big( \sum_{j=1}^3 \theta_j^2 + \theta_7^2 \Big)
+ 2{\mathscr{C}}_s
\theta_1 \theta_7 + \frac{ 2 B_2} { \sqrt{\rho} } \theta_1 \theta_5 + \frac{ 2 B_3} { \sqrt{\rho} } \theta_1 \theta_6
- \frac{2B_1}{\sqrt{\rho}} (\theta_2 \theta_5 + \theta_3 \theta_6),
\\
&
\tilde \Pi_4 = \tilde v_1 \Big( \sum_{j=8}^{10} \theta_j^2 + \theta_{14}^2 \Big)
+
2 \tilde {\mathscr{C}}_s
\theta_8 \theta_{14}
+ \frac{ 2 \tilde B_2} { \sqrt{\tilde\rho} } \theta_8 \theta_{12} + \frac{ 2 \tilde B_3} { \sqrt{\tilde \rho} } \theta_8 \theta_{13}
- \frac{2\tilde B_1}{\sqrt{\tilde \rho}} (\theta_9 \theta_{12} + \theta_{10} \theta_{13}).
\end{align*}
After a careful investigation, we find that $\Pi_j$, $j=1,2,3$, can be reformulated as
\begin{equation*}
\begin{split}
\Pi_j & = \sigma_j \frac{\tilde B_j - B_j} { \sqrt{\rho} } ( \theta_1 \theta_{j+3} +
\theta_1 \theta_{j+10} )
+ (1-\sigma_j) \frac{\tilde B_j - B_j} { \sqrt{\tilde \rho} } (\theta_{8} \theta_{j+3} + \theta_{8} \theta_{j+10} )
\\
& \quad + \big( \sigma_j v_1 + (1-\sigma_j) \tilde v_1 \big) \theta_{j+3}^2 - \big( \sigma_j v_1 + (1-\sigma_j) \tilde v_1 \big) \theta_{j+10}^2 ,
\end{split}
\end{equation*}
where $\sigma_1$, $\sigma_2$ and $\sigma_3$ can be taken as any real numbers.
In summary, we have reformulated $\Pi_f$ into a quadratic form in the variables $\theta_j, 1\le j \le 14$.
{\tt Step 2}. Estimate the upper bound of $\frac{|\Pi_f|}{\Pi_u}$.
There are several approaches to estimating this bound, resulting in
different formulas. One sharp upper bound is the spectral radius of the symmetric matrix associated with the above quadratic form, but it
cannot be formulated explicitly or computed easily in practice.
An explicit upper bound
is $\alpha_{1} ({\bf U},\tilde{\bf U})$ in \eqref{eq:alpha_i}. It is estimated as follows. We first notice that
\begin{align*}
\Pi_4 & = v_1 \Big( \sum_{j=1}^3 \theta_j^2 + \theta_7^2 \Big)
+ {\bm \vartheta}_6^\top {\bf A}_6 {\bm \vartheta}_6 ,
\end{align*}
where ${\bm \vartheta}_6 = (\theta_1,\theta_2,\theta_3,\theta_5,\theta_6,\theta_7)^\top$, and
$$
{\bf A}_6 = \begin{pmatrix}
0 & 0 & 0 & B_2 \rho^{-\frac12} & B_3 \rho^{-\frac12} & {\mathscr{C}}_s \\
0 & 0 & 0 & -B_1 \rho^{-\frac12} & 0 & 0 \\
0 & 0 & 0 & 0 & -B_1 \rho^{-\frac12} & 0 \\
B_2 \rho^{-\frac12} & -B_1 \rho^{-\frac12} & 0 & 0 & 0 & 0 \\
B_3 \rho^{-\frac12} & 0 & -B_1 \rho^{-\frac12} & 0 & 0 & 0 \\
{\mathscr{C}}_s & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
$$
The spectral radius
of ${\bf A}_6$ is $ {\mathscr{C}}_1$. This gives the following estimate
\begin{equation} \label{eq:PI4}
\begin{aligned}
|\Pi_4| & \le |v_1| \bigg( \sum_{j=1}^3
\theta_{j}^2 + \theta_7^2 \bigg)
+ | {\bm \vartheta}_6^\top {\bf A}_6 {\bm \vartheta}_6 |
\le |v_1| \bigg( \sum_{j=1}^3
\theta_{j}^2 + \theta_7^2 \bigg) + {\mathscr{C}}_1 |{\bm \vartheta}_6 |^2
\\
& = ( |v_1| + {\mathscr{C}}_1 ) \bigg( \sum_{j=1}^3
\theta_{j}^2 + \theta_7^2 \bigg) + {\mathscr{C}}_1 \big( \theta_5^2 + \theta_6^2 \big).
\end{aligned}
\end{equation}
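The claim $\varrho({\bf A}_6) = \mathscr{C}_1$ used in this estimate can be verified numerically. The standalone sketch below (ours) assembles ${\bf A}_6$ for a random state and compares its largest eigenvalue magnitude with the closed-form $\mathscr{C}_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = rng.uniform(0.5, 2.0)
B = rng.normal(size=3)
p, e = rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)
cs = p / (rho * np.sqrt(2.0 * e))       # script C_s = p / (rho sqrt(2e))
b1, b2, b3 = B / np.sqrt(rho)

A6 = np.array([[0,   0,   0,  b2,  b3, cs],
               [0,   0,   0, -b1,   0,  0],
               [0,   0,   0,   0, -b1,  0],
               [b2, -b1,  0,   0,   0,  0],
               [b3,  0, -b1,   0,   0,  0],
               [cs,  0,   0,   0,   0,  0]], dtype=float)

s = cs ** 2 + B @ B / rho
C1 = np.sqrt(0.5 * (s + np.sqrt(s ** 2 - 4.0 * cs ** 2 * B[0] ** 2 / rho)))
numeric = np.max(np.abs(np.linalg.eigvalsh(A6)))
```

The characteristic polynomial of ${\bf A}_6$ is biquadratic, $\lambda^4 - (\mathscr{C}_s^2 + |{\bf B}|^2/\rho)\lambda^2 + \mathscr{C}_s^2 B_1^2/\rho = 0$, whose largest root is exactly $\mathscr{C}_1$.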
Similarly, we have
\begin{align}
|\tilde \Pi_4| \le
( |\tilde v_1| + \tilde {\mathscr{C}}_1 ) \bigg( \sum_{j=8}^{10}
\theta_{j}^2 + \theta_{14}^2 \bigg) + \tilde {\mathscr{C}}_1 \big( \theta_{12}^2 + \theta_{13}^2 \big).
\label{eq:tPI4}
\end{align}
We then focus on the first three terms on the right-hand side of \eqref{eq:3parts} and rewrite their sum as
\begin{align}\label{eq:proofwkleq}
\Pi_1 + \Pi_2 + \Pi_3 = {\bm \vartheta}_8^\top {\bf A}_8 {\bm \vartheta}_8 + \sum_{j=1}^3 \big( \sigma_j v_1 + (1-\sigma_j) \tilde v_1 \big)
\big( \theta_{j+3}^2 - \theta_{j+10}^2 \big)
,
\end{align}
where ${\bm \vartheta}_8=(\theta_1,\theta_4,\theta_5,\theta_6,\theta_8,\theta_{11},\theta_{12},\theta_{13})^\top$, and
$$
{\bf A}_8 = \frac12
\begin{pmatrix}
~0~ & ~{\bm \psi}~ & ~0~ & ~{\bm \psi}~ \\
~{\bm \psi}^\top~ & ~{\bf O}~ & ~\tilde{\bm \psi}^\top~ & ~{\bf O}~ \\
~0~ & ~\tilde{\bm \psi}~ & ~0~ & ~\tilde {\bm \psi}~ \\
~{\bm \psi}^\top~ & ~{\bf O}~ & ~\tilde {\bm \psi}^\top~ & ~{\bf O}~
\end{pmatrix},
$$
with
${\bf O}$ denoting the $3\times 3$ null matrix, and
\begin{align*}
& {\bm \psi}=\rho^{-\frac12} \left( \sigma_1(\tilde B_1- B_1), \sigma_2(\tilde B_2-B_2), \sigma_3 (\tilde B_3-B_3) \right),
\\
& \tilde {\bm \psi}=\tilde \rho^{-\frac12} \left( (1-\sigma_1)(\tilde B_1- B_1), (1-\sigma_2)(\tilde B_2-B_2), (1-\sigma_3) (\tilde B_3-B_3) \right).
\end{align*}
Some algebraic manipulations show that the spectral radius of ${\bf A}_8$ is
$$
\varrho ({\bf A}_8)=\frac12 \left[ |{\bm \psi}|^2 + | \tilde {\bm \psi}|^2
+ \sqrt{ ( |{\bm \psi}|^2 - | \tilde {\bm \psi}|^2 )^2 + 4 ( {\bm \psi} \cdot \tilde {\bm \psi} )^2 } \right]^\frac12.
$$
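This expression for $\varrho({\bf A}_8)$ can likewise be checked numerically; the standalone sketch below (ours) builds ${\bf A}_8$ for random $\bm\psi,\tilde{\bm\psi}$ and compares its largest eigenvalue magnitude with the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
psi, psit = rng.normal(size=3), rng.normal(size=3)

O = np.zeros((3, 3))
row_psi  = np.concatenate(([0.0], psi,  [0.0], psi))   # (0, psi, 0, psi)
row_psit = np.concatenate(([0.0], psit, [0.0], psit))  # (0, psit, 0, psit)
mid = np.hstack([psi[:, None], O, psit[:, None], O])   # (psi^T, O, psit^T, O)
A8 = 0.5 * np.vstack([row_psi, mid, row_psit, mid])

a, b, c = psi @ psi, psit @ psit, psi @ psit
formula = 0.5 * np.sqrt(a + b + np.sqrt((a - b) ** 2 + 4.0 * c * c))
numeric = np.max(np.abs(np.linalg.eigvalsh(A8)))
```

Here $2\lambda^2$ must be an eigenvalue of $\bm\psi\bm\psi^\top + \tilde{\bm\psi}\tilde{\bm\psi}^\top$, which yields the stated radius.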
It then follows from \eqref{eq:proofwkleq} that, for all $\sigma_1,\sigma_2,\sigma_3\in \mathbb{R}$,
\begin{align*}
|\Pi_1 + \Pi_2 + \Pi_3| & \le \varrho ({\bf A}_8) | {\bm \vartheta}_8|^2 + \sum_{j=1}^3 \big| \sigma_j v_1 + (1-\sigma_j) \tilde v_1 \big|
\big| \theta_{j+3}^2 - \theta_{j+10}^2 \big|.
\end{align*}
For simplicity, we set $\sigma_1=\sigma_2=\sigma_3=\sigma$; then
$\varrho ({\bf A}_8)= f({\bf U}, \tilde {\bf U};\sigma),$
and
\begin{align}\nonumber
|\Pi_1 + \Pi_2 + \Pi_3| & \le f({\bf U}, \tilde {\bf U};\sigma) | {\bm \vartheta}_8|^2 +
| \sigma v_1 + (1-\sigma) \tilde v_1 |
\sum_{j=1}^3
\big| \theta_{j+3}^2 - \theta_{j+10}^2 \big|
\\
& \le f({\bf U}, \tilde {\bf U};\sigma) | {\bm \theta}|^2 +
{| \sigma v_1 + (1-\sigma)\tilde v_1 | }
\sum_{j=1}^3
\big( \theta_{j+3}^2 + \theta_{j+10}^2 \big). \label{eq:PI123}
\end{align}
Combining \eqref{eq:3parts}--
\eqref{eq:tPI4} and \eqref{eq:PI123}, we have
\begin{align*}
\begin{split}
|\Pi_f| & \le ( |v_1| + {\mathscr{C}}_1 ) \bigg( \sum_{j=1}^3
\theta_{j}^2 + \theta_7^2 \bigg)
+ ( |\tilde v_1| + \tilde {\mathscr{C}}_1 ) \bigg( \sum_{j=8}^{10}
\theta_{j}^2 + \theta_{14}^2 \bigg) + f({\bf U}, \tilde {\bf U};\sigma) | {\bm \theta}|^2
\\
& \quad + {\mathscr{C}}_1 \big( \theta_5^2 + \theta_6^2 \big) + \tilde {\mathscr{C}}_1 \big( \theta_{12}^2 + \theta_{13}^2 \big) +
| \sigma v_1 + (1-\sigma)\tilde v_1 |
\sum_{j=1}^3
\big( \theta_{j+3}^2 + \theta_{j+10}^2 \big)
\end{split}
\\
\begin{split}
& \le ( |v_1| + {\mathscr{C}}_1 ) \bigg( \sum_{j=1}^3
\theta_{j}^2 + \theta_7^2 \bigg)
+ ( |\tilde v_1| + \tilde {\mathscr{C}}_1 ) \bigg( \sum_{j=8}^{10}
\theta_{j}^2 + \theta_{14}^2 \bigg) + f({\bf U}, \tilde {\bf U};\sigma) | {\bm \theta}|^2
\\
& \quad +
\Big( | \sigma v_1 + (1-\sigma)\tilde v_1 | + {\mathscr{C}}_1 \Big) \sum_{j=4}^6
\theta_{j}^2
+\Big( | \sigma v_1 + (1-\sigma)\tilde v_1 | + \tilde {\mathscr{C}}_1 \Big) \sum_{j=11}^{13}
\theta_{j}^2
\end{split}
\\
\begin{split}
& \le \alpha_1 ( {\bf U},\tilde {\bf U};\sigma )~| {\bm \theta}|^2 = \alpha_1 ( {\bf U},\tilde {\bf U};\sigma )~\Pi_u,
\end{split}
\end{align*}
for all $\sigma \in \mathbb{R}$. Hence
$$|\Pi_f| \le \Pi_u \min_{\sigma \in \mathbb{R} } \alpha_1 ( {\bf U},\tilde {\bf U};\sigma ) = \Pi_u \alpha_1 ( {\bf U},\tilde {\bf U} ),$$
that is, the inequality \eqref{eq:needproof1} holds. The proof for the case of $i=1$ is completed.
{\tt (ii)}. We then verify the inequality
\eqref{eq:MHD:LLFsplit} for the cases $i=2$ and $3$, by using the inequality \eqref{eq:MHD:LLFsplit} for the case $i=1$ as well as the
orthogonal invariance in Lemma \ref{lem:MHD:zhengjiao}.
For the case of $i=2$, we introduce an orthogonal matrix ${\bf T} = {\rm diag} \{ 1, {\bf T}_3, {\bf T}_3, 1\}$ with
${\bf T}_3 := ({\bf e}_2^\top, {\bf e}_1^\top, {\bf e}_3^\top)$, where ${\bf e}_\ell $ is the $\ell$-th row of the unit matrix of size 3.
We then have ${\bf T} {\bf U}, {\bf T} \tilde {\bf U} \in {\mathcal G}$ by Lemma \ref{lem:MHD:zhengjiao}.
Let ${\mathcal H}_i({\bf U},\tilde{ {\bf U}},{\bf v}^*,{\bf B}^*,\alpha)$ denote the left-hand side of \eqref{eq:MHD:LLFsplit}.
Using \eqref{eq:MHD:LLFsplit} with $i=1$ for ${\bf T} {\bf U}, {\bf T} \tilde {\bf U}, {\bf v}^* {\bf T}_3 , {\bf B}^* {\bf T}_3$, we have
\begin{equation}\label{eq:MHD:LLFsplit111}
{\mathcal H}_1( {\bf T} {\bf U}, {\bf T} {\tilde {\bf U}} , {\bf v}^* {\bf T}_3 , {\bf B}^* {\bf T}_3 ,\alpha) > 0 ,
\end{equation}
for any $\alpha>\alpha_1 ( {\bf T} {\bf U}, {\bf T} {\tilde {\bf U}} ) = \alpha_2 ({\bf U},\tilde{\bf U}) $.
Utilizing ${\bf F}_1 ({\bf T}{ \bf U} ) = {\bf T} {\bf F}_2 ({\bf U})$ and the orthogonality of $\bf T$ and ${\bf T}_3$, we find that
\begin{align} \nonumber
{\mathcal H}_1( {\bf T} {\bf U}, {\bf T} {\tilde {\bf U}} , {\bf v}^* {\bf T}_3 , {\bf B}^* {\bf T}_3 ,\alpha)
= {\mathcal H}_2({\bf U},\tilde{ \bf U},{\bf v}^*,{\bf B}^*,\alpha).
\end{align}
Thus \eqref{eq:MHD:LLFsplit111} implies
\eqref{eq:MHD:LLFsplit}
for $i=2$.
Similar arguments apply for $i=3$. The proof is completed.
\end{proof}
\begin{remark}
In practice, it is not easy to determine the minimum value in
\eqref{eq:alpha_i}. Since $\alpha_i({\bf U},\tilde{\bf U})$ only plays the role of a lower bound,
one can replace it with $\alpha_i({\bf U},\tilde{\bf U};\sigma)$ for a special
$\sigma$. For example, taking $\sigma=\frac{\rho}{\rho+\tilde \rho}$
minimizes $f({\bf U},\tilde{\bf U};\sigma)$ and gives
{\small
\begin{equation*}
\alpha_{i} \bigg({\bf U},\tilde{\bf U};\frac{\rho}{\rho+\tilde \rho} \bigg) =
\max\bigg\{ |v_i|+ {\mathscr{C}}_i, |\tilde v_i| + \tilde {\mathscr{C}}_i , \frac{|\rho v_i + \tilde \rho \tilde v_i |}{\rho + \tilde \rho} + \max\{ {\mathscr{C}}_i , \tilde {\mathscr{C}}_i \} \bigg\}
+ \frac{ |{\bf B}-\tilde{\bf B}| }{ \sqrt{2 (\rho + \tilde \rho)} }.
\end{equation*}}
Taking $\sigma=\frac{ \sqrt{\rho}}{\sqrt{\rho}+\sqrt{\tilde \rho}}$ gives
{\small
\begin{equation*}
\alpha_{i} \bigg({\bf U},\tilde{\bf U};\frac{\sqrt{\rho}}{\sqrt{\rho}+\sqrt{\tilde \rho}} \bigg) =
\max\bigg\{ |v_i|+ {\mathscr{C}}_i, |\tilde v_i| + \tilde {\mathscr{C}}_i , \frac{| \sqrt{\rho} v_i + \sqrt{\tilde \rho} \tilde v_i |} {\sqrt{\rho}+\sqrt{\tilde \rho}} + \max\{ {\mathscr{C}}_i , \tilde {\mathscr{C}}_i \} \bigg\}
+ \frac{ |{\bf B}-\tilde{\bf B}| }{ \sqrt{\rho} + \sqrt{\tilde \rho} }.
\end{equation*}
}
\end{remark}
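The explicit bound with $\sigma=\frac{\rho}{\rho+\tilde\rho}$ translates directly into code. The sketch below (our own function names; primitive-variable inputs, Python index $i=0,1,2$) implements $\alpha_i\big({\bf U},\tilde{\bf U};\frac{\rho}{\rho+\tilde\rho}\big)$:

```python
import numpy as np

def script_C(rho, B, p, e, i):
    """script C_i, built from script C_s = p / (rho sqrt(2e))."""
    B = np.asarray(B, float)
    cs2 = (p / (rho * np.sqrt(2.0 * e))) ** 2
    b2 = B @ B / rho
    disc = np.sqrt((cs2 + b2) ** 2 - 4.0 * cs2 * B[i] ** 2 / rho)
    return np.sqrt(0.5 * (cs2 + b2 + disc))

def alpha(i, rho, v, B, p, e, rhot, vt, Bt, pt, et):
    """alpha_i(U, Ut; sigma) with sigma = rho / (rho + rhot)."""
    v, vt = np.asarray(v, float), np.asarray(vt, float)
    B, Bt = np.asarray(B, float), np.asarray(Bt, float)
    Ci, Cti = script_C(rho, B, p, e, i), script_C(rhot, Bt, pt, et, i)
    mean = abs(rho * v[i] + rhot * vt[i]) / (rho + rhot)
    base = max(abs(v[i]) + Ci, abs(vt[i]) + Cti, mean + max(Ci, Cti))
    return base + np.linalg.norm(B - Bt) / np.sqrt(2.0 * (rho + rhot))
```

For ${\bf U}=\tilde{\bf U}$ with zero magnetic field this reproduces the value $|v_i| + \frac{p}{\rho\sqrt{2e}}$ noted below for the Euler limit.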
Let $a_i := \max\{ {\mathscr{R}}_i ({\bf U}), {\mathscr{R}}_i(\tilde{\bf U}) \}$. For the gamma-law EOS, the following proposition shows that
$\alpha_{i} ({\bf U},\tilde{\bf U}) < 2a_i$ and
$\alpha_{i} ({\bf U},\tilde{\bf U}) < a_i + {\mathcal O}( |{\bf U} -\tilde {\bf U}| )$, $i=1,2,3$. When ${\bf U}=\tilde {\bf U}$ with zero magnetic field,
$\alpha_{i} ( {\bf U}, \tilde{\bf U} )=|v_i| + \frac{p}{\rho \sqrt{2e}}$,
which is consistent with the bound in
the LF splitting property for the Euler equations with
a general EOS \cite{Zhang2011}.
\begin{proposition}\label{prop:alpha}
For any admissible states ${\bf U},\tilde{\bf U}$ of an ideal gas,
it holds
\begin{gather} \label{eq:aaaaWKL}
\alpha_{i} ({\bf U},\tilde{\bf U})
<2a_i,
\\ \label{eq:bbbbWKL}
\alpha_{i} ({\bf U},\tilde{\bf U})
< a_i + \min \big\{ \big| | v_i| - |\tilde v_i| \big|, \big| {\mathscr{C}}_i - \tilde {\mathscr{C}}_i \big| \big\}
+ \frac{ |{\bf B}-\tilde{\bf B}| }{ \sqrt{2 (\rho + \tilde \rho)} }.
\end{gather}
\end{proposition}
\begin{proof}
The inequality \eqref{eq:aaaaWKL} can be shown as follows
{\small \begin{align*}
\alpha_{i} ({\bf U},\tilde{\bf U}) & \le
\alpha_{i} \bigg({\bf U},\tilde{\bf U};\frac{\sqrt{\rho}}{\sqrt{\rho}+\sqrt{\tilde \rho}} \bigg)
\\
& \le
\max\Big\{ |v_i|+ {\mathscr{C}}_i,~|\tilde v_i| +
\tilde {\mathscr{C}}_i \Big\}
+ \frac{| \sqrt{\rho} v_i + \sqrt{\tilde \rho} \tilde v_i |} {\sqrt{\rho}+\sqrt{\tilde \rho}}
+ \frac{ |{\bf B}-\tilde{\bf B}| }{ \sqrt{\rho} + \sqrt{\tilde \rho} }
\\
& <
a_i
+ \frac{\sqrt{\rho} } {\sqrt{\rho}+\sqrt{\tilde \rho}}
\bigg( |v_i| + \frac{|\bf B|}{\sqrt{\rho}} \bigg)
+ \frac{\sqrt{\tilde \rho} } {\sqrt{\rho}+\sqrt{\tilde \rho}}
\bigg( |\tilde v_i| + \frac{|\tilde {\bf B}|}{\sqrt{\tilde \rho}} \bigg)
\\
& \le
a_i + \max \bigg\{ | v_i| + \frac{|\bf B|}{\sqrt{\rho}},
|\tilde v_i| + \frac{|\tilde {\bf B}|}{\sqrt{\tilde \rho}} \bigg\}
\\
&
\le a_i + \max \bigg\{ | v_i| + \frac{{\mathcal C}_i}{\sqrt{2}},
|\tilde v_i| + \frac{\tilde {\mathcal C}_i}{\sqrt{2}} \bigg\}
< 2 a_i,
\end{align*}}
where we have used ${\mathscr{C}}_i < {\mathcal{C}}_i$, which follows from ${\mathscr{C}}_s = \sqrt{ \frac{(\gamma-1)p}{2\rho } }< {\mathcal{C}}_s$.
We then turn to prove \eqref{eq:bbbbWKL}.
Using the triangle inequality, one can easily show that
\begin{align*}
&| v_i| + \tilde {\mathscr{C}}_i
\le \min \big\{ \big| | v_i| - |\tilde v_i| \big|, \big| {\mathscr{C}}_i - \tilde {\mathscr{C}}_i \big| \big\} +
\max\big\{ |v_i| + {\mathscr{C}}_i, |\tilde v_i| + \tilde {\mathscr{C}}_i \big\},
\\
&|\tilde v_i| + {\mathscr{C}}_i
\le \min \big\{ \big| | v_i| - |\tilde v_i| \big|, \big| {\mathscr{C}}_i - \tilde {\mathscr{C}}_i \big| \big\} +
\max \big\{ |v_i| + {\mathscr{C}}_i, |\tilde v_i| + \tilde {\mathscr{C}}_i \big\}.
\end{align*}
Therefore,
\begin{align*}
& \max\bigg\{ |v_i|+ {\mathscr{C}}_i,|\tilde v_i| + \tilde {\mathscr{C}}_i ,\frac{|\rho v_i + \tilde \rho \tilde v_i |}{\rho + \tilde \rho} + \max\{ {\mathscr{C}}_i , \tilde {\mathscr{C}}_i \} \bigg\}
\\
& \quad \le \max \big\{
|v_i|+ {\mathscr{C}}_i,|\tilde v_i| + \tilde {\mathscr{C}}_i ,
|\tilde v_i|+ {\mathscr{C}}_i,| v_i| + \tilde {\mathscr{C}}_i
\big\}
\\
& \quad \le \max\big\{ |v_i|+ {\mathscr{C}}_i,|\tilde v_i| + \tilde {\mathscr{C}}_i \big\} + \min \big\{ \big| | v_i| - |\tilde v_i| \big|, \big| {\mathscr{C}}_i - \tilde {\mathscr{C}}_i \big| \big\}
\\
& \quad
< a_i + \min \big\{ \big| | v_i| - |\tilde v_i| \big|, \big| {\mathscr{C}}_i - \tilde {\mathscr{C}}_i \big| \big\}.
\end{align*}
Then using $\alpha_{i} ({\bf U},\tilde{\bf U} ) \le \alpha_{i} \big({\bf U},\tilde{\bf U};\frac{\rho}{\rho+\tilde \rho} \big)$ completes the proof.
\end{proof}
\begin{remark}
It is worth emphasizing the importance of the last term on the left-hand side of \eqref{eq:MHD:LLFsplit}.
Although highly technical, this term is necessary and crucial in deriving
the generalized LF splitting properties, and including it is one of the key points of
this paper.
Its value is not of a fixed sign. However, without this term, the inequality \eqref{eq:MHD:LLFsplit} does not hold, even if $\alpha_i$ is replaced with $\chi\alpha_i$ for any constant $\chi \ge 1$.
More importantly, this term cancels out neatly
under the ``discrete divergence-free'' condition \eqref{eq:descrite1DDIV} or \eqref{eq:descrite2DDIV};
see the proofs of the generalized LF splitting properties in
the following theorems.
\end{remark}
We now point out some facts and observations.
Note that the inequality \eqref{eq:MHD:LLFsplit} in Lemma \ref{theo:MHD:LLFsplit}
involves two states (${\bf U}$ and $\tilde {\bf U}$).
In the relativistic MHD case (Lemma 2.9 in \cite{WuTangM3AS}),
we derived the
generalized LF splitting properties from an inequality similar to \eqref{eq:MHD:LLFsplit} but involving only one state.
It seems natural to conjecture a similar ``one-state'' inequality for the ideal MHD case in the following form
\begin{equation}\label{eq:MHD:LLFsplit:one-state}
\bigg( {\bf U} + \frac{ {\bf F}_i({\bf U})}{\alpha}
\bigg) \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
+ \frac{ 1 }{\alpha} \bigg( v_i^* \frac{|{\bf B}^*|^2}{2} - B_i({\bf v}^* \cdot {\bf B}^*) \bigg) > 0, \quad
\forall {\bf U} \in {\mathcal G},~~
\forall {\bf v}^*,{\bf B}^* \in {\mathbb R}^3,
\end{equation}
for any $ |\alpha| > \widehat \alpha_i ({\bf U})$,
where
the lower bound $\widehat \alpha_i ({\bf U})$ is expected to be independent of ${\bf v}^*$ and ${\bf B}^*$.
For special relativistic MHD, the lower bound can be taken as {\em the speed of light} \cite{WuTangM3AS},
which is a constant, and brings us much convenience because any velocities
(e.g., $|{\bf v}|$ and $|{\bf v}^*|$) are uniformly smaller than such a constant according to the theory of special relativity.
Unfortunately, for the ideal MHD case it is {\em impossible} to
establish \eqref{eq:MHD:LLFsplit:one-state}
for any $ |\alpha| > \widehat \alpha_i ({\bf U})$ with a desired bound $ \widehat \alpha_i ({\bf U})$ dependent only on ${\bf U}$.
This is because non-relativistic velocities are generally unbounded: as $v_i^* {\rm sign} (-\alpha)$ and $|{\bf B}^*|$ approach $+\infty$,
the negative cubic term ${v_i^* |{\bf B}^*|^2}/{2\alpha}$ in \eqref{eq:MHD:LLFsplit:one-state}
dominates the sign and cannot be controlled by the other terms on the left-hand side of \eqref{eq:MHD:LLFsplit:one-state}.
Hence, the construction of
generalized LF splitting properties in the ideal MHD case
involves difficulties essentially different from those in the special relativistic case.
If $\widehat \alpha_i$ is not required to be independent of ${ v}_i^*$, we have the following proposition, whose proof is given in Appendix \ref{sec:proofwkl111}.
\begin{proposition}\label{lem:fornewGLF}
The inequality \eqref{eq:MHD:LLFsplit:one-state}
holds for any $ |\alpha| > \widehat \alpha_i ({\bf U},{v}_i^*)$
and any $i \in \{1,2,3\}$, where
$$
\widehat \alpha_i ({\bf U},{v}_i^*) =
\max \big\{ |v_i|, |v_i^*| \big\}
+ {\mathscr{C}}_i.
$$
\end{proposition}
\subsubsection{Derivation of generalized LF splitting properties} \label{sec:gLxF}
We first present the 1D generalized LF splitting property.
\begin{theorem}[1D generalized LF splitting]\label{theo:MHD:LLFsplit1D}
If $ \hat{\bf U}= (\hat\rho, \hat{\bf m}, \hat{\bf B}, \hat E)^{\top} $ and $ \check{\bf U}=(\check \rho, \check{\bf m}, \check{\bf B}, \check{E})^{\top} $ both belong to $\mathcal G$ and satisfy the
1D
``discrete divergence-free'' condition
\begin{equation}\label{eq:descrite1DDIV}
\hat{B_1} - \check {B_1}=0,
\end{equation}
then for any $\alpha > \alpha_1 (\hat{\bf U},\check{\bf U}) $ it holds
\begin{equation}\label{eq:MHD:LLFsplit1D}
\overline{\bf U}:=\frac{1}{2} \bigg( \hat{\bf U} - \frac{ {\bf F}_1(\hat{\bf U})}{\alpha}
+
\check{\bf U} + \frac{ {\bf F}_1(\check{\bf U})}{\alpha} \bigg)
\in {\mathcal G}.
\end{equation}
\end{theorem}
\begin{proof}
The first component of $\overline{\bf U}$ equals $\frac12 \big( \hat \rho \big(1-\frac{\hat v_1}{\alpha}\big) + \check\rho \big(1+\frac{\check v_1}{\alpha}\big) \big) > 0$.
For any ${\bf v}^*,{\bf B}^*\in \mathbb{R}^3$, utilizing Lemma \ref{theo:MHD:LLFsplit} and the condition \eqref{eq:descrite1DDIV} gives
{\small \begin{align*}
\overline{\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
= \frac12 \bigg( \hat{\bf U} - \frac{ {\bf F}_1(\hat{\bf U})}{\alpha}
+
\check{\bf U} + \frac{ {\bf F}_1(\check{\bf U})}{\alpha}
\bigg) \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
> \frac{ \check B_1 - \hat B_1 }{2\alpha} ({\bf v}^* \cdot {\bf B}^*) = 0.
\end{align*}}
This implies $\overline{\bf U}\in{\mathcal G}_* = {\mathcal G}$.
\end{proof}
\begin{remark}\label{rem:exa}
As indicated by Proposition \ref{prop:alpha},
the bound $\alpha_1 (\hat{\bf U},\check{\bf U}) $ for $\alpha$
can be very close to $a_1 = \max\{ {\mathscr{R}}_1 (\hat{\bf U}), {\mathscr{R}}_1(\check{\bf U}) \}$, which
is the numerical viscosity coefficient in the standard local LF scheme.
Nevertheless, \eqref{eq:MHD:LLFsplit1D} does not hold for $\alpha = a_1$ in general.
A counterexample is given by
the following admissible states of an ideal gas with $\gamma = 1.4$ and $\hat B_1 = \check B_1$,
\begin{equation}\label{eq:contourex}
\begin{cases}
\hat{\bf U}= ( 0.2,0,0.2,0,10,5,0,62.625 )^\top,
\\
\check{\bf U}= ( 0.32,0,-0.32,0,10,10,0,100.16025 )^\top.
\end{cases}
\end{equation}
For \eqref{eq:contourex} and $\alpha= a_1 $, one can verify that
$\overline{\bf U}$ in \eqref{eq:MHD:LLFsplit1D}
satisfies ${\mathcal E}(\overline{\bf U})
<-0.05$ and
$\overline{\bf U} \notin {\mathcal G}$.
\end{remark}
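Remark \ref{rem:exa} can be checked numerically. The following Python sketch is illustrative only and assumes the conserved ordering ${\bf U}=(\rho,{\bf m},{\bf B},E)^\top$, the ideal-gas internal energy ${\mathcal E}({\bf U})=E-\frac{|{\bf m}|^2}{2\rho}-\frac{|{\bf B}|^2}{2}$, and ${\mathscr R}_1({\bf U})=|v_1|+{\mathcal C}_1$ with ${\mathcal C}_1$ the fast magnetosonic speed in the $x_1$-direction; it evaluates $\overline{\bf U}$ in \eqref{eq:MHD:LLFsplit1D} for the states \eqref{eq:contourex}, once with $\alpha=a_1$ (inadmissible result) and once with $\alpha=2a_1>\alpha_1$ (admissible, in line with Proposition \ref{prop:alpha} and Theorem \ref{theo:MHD:LLFsplit1D}).

```python
import numpy as np

GAMMA = 1.4

def internal_energy(U):
    """E - |m|^2/(2 rho) - |B|^2/2, the internal energy function E(U)."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    return E - m @ m / (2 * rho) - B @ B / 2

def pressure(U):
    return (GAMMA - 1) * internal_energy(U)

def flux1(U):
    """x1-directional flux F_1(U) of the ideal MHD system."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    v = m / rho
    ptot = pressure(U) + B @ B / 2
    F = np.empty(8)
    F[0] = m[0]
    F[1:4] = m[0] * v - B[0] * B
    F[1] += ptot
    F[4:7] = v[0] * B - B[0] * v          # fifth component is zero
    F[7] = v[0] * (E + ptot) - B[0] * (v @ B)
    return F

def R1(U):
    """|v_1| + fast magnetosonic speed in the x1-direction."""
    rho, B = U[0], U[4:7]
    cs2 = GAMMA * pressure(U) / rho
    b2 = B @ B / rho
    cf2 = 0.5 * (cs2 + b2 + np.sqrt((cs2 + b2) ** 2 - 4 * cs2 * B[0] ** 2 / rho))
    return abs(U[1] / rho) + np.sqrt(cf2)

Uhat   = np.array([0.2,  0.0,  0.2,  0.0, 10.0,  5.0, 0.0,  62.625])
Ucheck = np.array([0.32, 0.0, -0.32, 0.0, 10.0, 10.0, 0.0, 100.16025])
a1 = max(R1(Uhat), R1(Ucheck))

def Ubar(alpha):  # the combination in (eq:MHD:LLFsplit1D)
    return 0.5 * (Uhat - flux1(Uhat) / alpha + Ucheck + flux1(Ucheck) / alpha)

print(internal_energy(Ubar(a1)))      # about -0.054: not admissible
print(internal_energy(Ubar(2 * a1)))  # positive: admissible
```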
\begin{remark}\label{rem:chengassum}
The proof of Lemma 2.1 in \cite{cheng} implies that
\begin{equation}\label{eq:statebycheng}
{\bf U}_\lambda := \hat{\bf U} - \lambda \big( {\bf F}_1 ( \hat{\bf U} ) + a_1 \hat {\bf U} - {\bf F}_1 ( \check{\bf U} ) - a_1 \check{\bf U} \big) \in {\mathcal G}, \quad \forall \lambda \in \big(0, 1/{(2a_1)}\big],
\end{equation}
holds for all admissible states $\hat{\bf U}, \check {\bf U}$ with $\hat B_1= \check B_1$.
However, for the particular admissible states $\hat{\bf U},{\check {\bf U}}$ in \eqref{eq:contourex},
Remark \ref{rem:exa} implies that \eqref{eq:statebycheng} does not always hold when $\lambda $ is close to $\frac{1}{2a_1}$,
because
$
\mathop {\lim }\limits_{\lambda \to {1}/{(2 a_1)}} {\mathcal E} ( {\bf U}_\lambda )
={\mathcal E}( \overline {\bf U}) < 0 .
$
This deserves further explanation, as
the derivation of \eqref{eq:statebycheng} in \cite{cheng} is
not mathematically rigorous but based on two assumptions.
One assumption is very reasonable (but unproven), stating that
the exact solution ${\bf U}(x_1,t)$ to the 1D Riemann problem (RP)
\begin{equation}\label{eq:RP}
\begin{cases}
\frac{\partial {\bf U}}{\partial t} + \frac{\partial {\bf F}_1 ( {\bf U} )}{\partial x_1} = {\bf 0} ,\\
{\bf U}(x_1,0) = \begin{cases}
\hat {\bf U}, &x_1 < 0,\\
\check {\bf U}, &x_1 > 0,
\end{cases}
\end{cases}
\end{equation}
is always admissible if $\hat{\bf U},~\check{\bf U}\in {\mathcal G}$ with $\hat B_1= \check B_1$.
Another ``assumption'' (not mentioned but implicitly used in \cite{cheng}) is that $a_1=\| {\mathscr{R}}_1 ( {\bf U}(\cdot,0) )\|_{\infty}$ is an upper bound of the maximum wave speed in the above RP.
In fact, $a_1$ may fail to be such a bound when
fast shocks exist in the RP solution, as indicated in \cite{Guermond2016} for the gas dynamics system (with zero magnetic field).
Hence, the latter assumption may affect some 1D analysis in \cite{cheng},
see our finding in Theorem \ref{eq:1DnotPP}.
It is also worth emphasizing that the 1D analysis in \cite{cheng} could work in general if
$\| {\mathscr{R}}_1 ( {\bf U}(\cdot,0) )\|_{\infty}$ is replaced with a rigorous upper bound of the maximum wave speed in the RP.
\end{remark}
\begin{remark}
Proposition \ref{lem:fornewGLF} can also be used to
derive generalized LF splitting properties,
see Appendix \ref{sec:gLFnewproof} for
the 1D
case.
\end{remark}
We then present the multi-dimensional generalized LF splitting properties.
\begin{theorem}[2D generalized LF splitting]\label{theo:MHD:LLFsplit2D}
If $\bar{\bf U}^i$, $\tilde{\bf U}^{i}$, $\hat{\bf U}^i$,
$\check{\bf U}^i
\in {\mathcal G}$ for $i=1,\cdots,{\tt Q}$ satisfy the 2D ``discrete divergence-free'' condition
\begin{equation}\label{eq:descrite2DDIV}
\frac{{\sum\limits_{i=1}^{\tt Q} {{\omega _i}({\bar B_1}^i - {\tilde B_1}^i)} }}{{\Delta x}}
+ \frac{{\sum\limits_{i=1}^{\tt Q} {{\omega _i}({\hat B_2}^i - {\check B_2}^i)} }}{{\Delta y}}
=0,
\end{equation}
where $\Delta x,\Delta y >0$, and the sum of the positive numbers
$\left\{\omega _i \right\}_{i=1}^{\tt Q}$ equals one,
then for any $\alpha_{1}^{\tt LF}$ and $\alpha_{2}^{\tt LF}$ satisfying
$ \alpha_{1}^{\tt LF} > \max_{1\le i\le {\tt Q}} \alpha_1 ( \bar { \bf U }^i, \tilde{
\bf U}^i ) $, $\alpha_{2}^{\tt LF} > \max_{1\le i\le {\tt Q}}
\alpha_2 ( \hat { \bf U }^i , \check{\bf U}^i ),
$
it holds
\begin{equation} \label{eq:MHD:LLFsplit2D}
\begin{split}
\overline{\bf U}:=
\frac{1}{ 2\left( \frac{\alpha_{1}^{\tt LF} }{\Delta x} + \frac{\alpha_{2}^{\tt LF}}{\Delta y} \right)}
\sum\limits_{i=1}^{\tt Q} { \omega _i }
\bigg[
&\frac{\alpha_{1}^{\tt LF}}{\Delta x} \bigg(
\bar { \bf U }^i - \frac{ {\bf F}_1 ( \bar { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
+\tilde{
\bf U}^i + \frac{ {\bf F}_1 ( \tilde { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
\bigg)
\\%&\quad \quad \quad \quad \quad
+&
\frac{\alpha_{2}^{\tt LF}}{\Delta y} \bigg(
\hat { \bf U }^i - \frac { {\bf F}_2 ( \hat { \bf U}^i) } { \alpha_{2}^{\tt LF} }
+\check{
\bf U}^i + \frac { {\bf F}_2 ( \check { \bf U}^i) } { \alpha_{2}^{\tt LF} }
\bigg)
\bigg]
\in {\mathcal G}.
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
The first component of $\overline{\bf U}$ equals
\begin{align*}
&
\frac1{ 2\left( \frac{\alpha_{1}^{\tt LF} }{\Delta x} + \frac{\alpha_{2}^{\tt LF}}{\Delta y} \right)}
\sum\limits_{i=1}^{\tt Q} {{\omega _i}}
\bigg(
\frac{ \bar { \rho }^i ( \alpha_{1}^{\tt LF} - { {\bar v_1}^i } )
+\tilde{
\rho }^i ( \alpha_{1}^{\tt LF} + {{\tilde v_1}^i} ) }{\Delta x}
+
\frac{ \hat { \rho }^i ( \alpha_{2}^{\tt LF} - { { \hat v_2}^i } )
+\check{
\rho}^i ( \alpha_{2}^{\tt LF} + {\check v_2}^i) }{\Delta y}
\bigg),
\end{align*}
which is positive.
For any ${\bf v}^*,{\bf B}^* \in \mathbb{R}^3$, using Lemma \ref{theo:MHD:LLFsplit} and the condition
\eqref{eq:descrite2DDIV} gives
{\small \begin{align*}
& \bigg( \overline{\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg) \times 2\left( \frac{\alpha_{1}^{\tt LF} }{\Delta x} + \frac{\alpha_{2}^{\tt LF}}{\Delta y} \right)
\\
& \quad =
\sum\limits_{i=1}^{\tt Q} { \omega _i }
\Bigg\{
\frac{\alpha_{1}^{\tt LF}}{\Delta x} \bigg[\bigg(
\bar { \bf U }^i - \frac{ {\bf F}_1 ( \bar { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
+\tilde{
\bf U}^i + \frac{ {\bf F}_1 ( \tilde { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
\bigg) \cdot {\bf n}^* + |{\bf B}^*|^2 \bigg]
\\
& \quad \qquad \quad \ +
\frac{\alpha_{2}^{\tt LF}}{\Delta y} \bigg[ \bigg(
\hat { \bf U }^i - \frac { {\bf F}_2 ( \hat { \bf U}^i) } { \alpha_{2}^{\tt LF} }
+\check{
\bf U}^i + \frac { {\bf F}_2 ( \check { \bf U}^i) } { \alpha_{2}^{\tt LF} }
\bigg) \cdot {\bf n}^* + |{\bf B}^*|^2 \bigg]
\Bigg\}
\\[2mm]
& \quad \overset{\eqref{eq:MHD:LLFsplit}}{>}
\sum\limits_{i=1}^{\tt Q} { \omega _i }
\Bigg\{
\frac{\alpha_{1}^{\tt LF}}{\Delta x} \bigg[ - \frac{ \bar B_1^i - \tilde B_1^i }{ \alpha_{1}^{\tt LF} } ({\bf v}^* \cdot {\bf B}^*) \bigg]
+
\frac{\alpha_{2}^{\tt LF}}{\Delta y} \bigg[ - \frac{ \hat B_2^i - \check B_2^i }{ \alpha_{2}^{\tt LF} } ({\bf v}^* \cdot {\bf B}^*) \bigg]
\Bigg\}
\\[2mm]
& \quad = - ({\bf v}^* \cdot {\bf B}^*)
\sum\limits_{i=1}^{\tt Q} {{\omega _i}} \left( \frac{{{\bar B_1}^i - {\tilde B_1}^i} }{{\Delta x}} + \frac{ {\hat B_2}^i - {\check B_2}^i} {{\Delta y}} \right) \quad \overset{\eqref{eq:descrite2DDIV}}{=} 0. \nonumber
\end{align*}}
It follows that $\overline{\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}>0$. Thus $\overline{\bf U} \in {\mathcal G}_* = {\mathcal G}$.
\end{proof}
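As a sanity check of Theorem \ref{theo:MHD:LLFsplit2D}, the sketch below constructs a ${\tt Q}=1$, $\omega_1=1$ example whose four states satisfy \eqref{eq:descrite2DDIV}, and verifies that $\overline{\bf U}$ in \eqref{eq:MHD:LLFsplit2D} is admissible when each $\alpha_\ell^{\tt LF}$ is taken as twice the corresponding maximal ${\mathscr R}_\ell$ (which exceeds the required bound by Proposition \ref{prop:alpha}). The conserved ordering $(\rho,{\bf m},{\bf B},E)^\top$, the $\gamma$-law pressure, and ${\mathscr R}_\ell=|v_\ell|+{\mathcal C}_\ell$ with ${\mathcal C}_\ell$ the fast speed are assumed conventions; the states themselves are made up for illustration.

```python
import numpy as np

GAMMA = 1.4

def internal_energy(U):
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    return E - m @ m / (2 * rho) - B @ B / 2

def pressure(U):
    return (GAMMA - 1) * internal_energy(U)

def flux(U, d):
    """Directional flux F_{d+1}(U) of ideal MHD, d = 0 (x1) or 1 (x2)."""
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    v = m / rho
    ptot = pressure(U) + B @ B / 2
    F = np.empty(8)
    F[0] = m[d]
    F[1:4] = m[d] * v - B[d] * B
    F[1 + d] += ptot
    F[4:7] = v[d] * B - B[d] * v
    F[7] = v[d] * (E + ptot) - B[d] * (v @ B)
    return F

def R(U, d):
    """|v_{d+1}| + fast magnetosonic speed in direction d+1."""
    rho, B = U[0], U[4:7]
    cs2 = GAMMA * pressure(U) / rho
    b2 = B @ B / rho
    cf2 = 0.5 * (cs2 + b2 + np.sqrt((cs2 + b2) ** 2 - 4 * cs2 * B[d] ** 2 / rho))
    return abs(U[1 + d] / rho) + np.sqrt(cf2)

def state(B):  # admissible state with rho = 1, v = 0, p = 1
    B = np.asarray(B, float)
    return np.concatenate(([1.0], np.zeros(3), B, [1 / (GAMMA - 1) + B @ B / 2]))

dx = dy = 1.0
Ub, Ut = state([1, 0, 0]), state([0, 0, 0])  # (Bbar1 - Btil1)/dx = 1
Uh, Uc = state([0, 0, 0]), state([0, 1, 0])  # (Bhat2 - Bchk2)/dy = -1, so DDF holds

a1 = 2 * max(R(Ub, 0), R(Ut, 0))  # exceeds max alpha_1 by Proposition prop:alpha
a2 = 2 * max(R(Uh, 1), R(Uc, 1))

Ubar = ((a1 / dx) * (Ub - flux(Ub, 0) / a1 + Ut + flux(Ut, 0) / a1)
        + (a2 / dy) * (Uh - flux(Uh, 1) / a2 + Uc + flux(Uc, 1) / a2)
        ) / (2 * (a1 / dx + a2 / dy))

print(Ubar[0], internal_energy(Ubar))  # both positive: Ubar is admissible
```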
\begin{theorem}[3D generalized LF splitting]\label{theo:MHD:LLFsplit3D}
If $\bar{\bf U}^i$, $\tilde{\bf U}^{i}$, $\hat{\bf U}^i$,
$\check{\bf U}^i$, $\acute{\bf U}^i$, $\grave{\bf U}^i \in {\mathcal G}$
for $i=1,\cdots, {\tt Q}$,
and they satisfy the 3D ``discrete divergence-free'' condition
\begin{equation*}
\frac{{\sum\limits_{i=1}^{\tt Q} {{\omega _i}({\bar B_1}^i - {\tilde B_1}^i)} }}{{\Delta x}}
+ \frac{{\sum\limits_{i=1}^{\tt Q} {{\omega _i}({\hat B_2}^i - {\check B_2}^i)} }}{{\Delta y}}
+ \frac{{\sum\limits_{i=1}^{\tt Q} {{\omega _i}({\acute B_3}^i - {\grave B_3}^i)} }}{{\Delta { z}}}
=0,
\end{equation*}
with $\Delta x, \Delta y, \Delta { z} >0$, and the sum of the positive numbers
$\left\{\omega _i \right\}_{i=1}^{\tt Q}$ equals one,
then for any $\alpha_{1}^{\tt LF}$, $\alpha_{2}^{\tt LF}$ and $\alpha_{3}^{\tt LF}$ satisfying
$$ \alpha_{1}^{\tt LF} > \max_{1\le i\le {\tt Q}} \alpha_1 ( \bar { \bf U }^i, \tilde{
\bf U}^i ) ,\quad \alpha_{2}^{\tt LF} > \max_{1\le i\le {\tt Q}}
\alpha_2 ( \hat { \bf U }^i , \check{\bf U}^i ),\quad \alpha_{3}^{\tt LF} > \max_{1\le i\le {\tt Q}}
\alpha_3 ( \acute { \bf U }^i , \grave{\bf U}^i ),
$$
it holds $\overline{\bf U} \in {\mathcal G}$, where
\begin{equation*}
\begin{split}
\overline{\bf U} &:= \frac{1}{ 2\left( \frac{\alpha_{1}^{\tt LF} }{\Delta x} + \frac{\alpha_{2}^{\tt LF}}{\Delta y} + \frac{\alpha_{3}^{\tt LF}}{\Delta { z}} \right)}
\sum\limits_{i=1}^{\tt Q} { \omega _i }
\bigg[
\frac{\alpha_{1}^{\tt LF}}{\Delta x} \bigg(
\bar { \bf U }^i - \frac{ {\bf F}_1 ( \bar { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
+\tilde{
\bf U}^i + \frac{ {\bf F}_1 ( \tilde { \bf U}^i) }{ \alpha_{1}^{\tt LF} }
\bigg)
\\%&\quad \quad \quad \quad \quad
& +
\frac{\alpha_{2}^{\tt LF}}{\Delta y} \bigg(
\hat { \bf U }^i - \frac { {\bf F}_2 ( \hat { \bf U}^i) } { \alpha_{2}^{\tt LF} }
+\check{
\bf U}^i + \frac { {\bf F}_2 ( \check { \bf U}^i) } { \alpha_{2}^{\tt LF} }
\bigg)
+
\frac{\alpha_{3}^{\tt LF}}{\Delta { z}} \bigg(
\acute { \bf U }^i - \frac { {\bf F}_3 ( \acute { \bf U}^i) } { \alpha_{3}^{\tt LF} }
+\grave{
\bf U}^i + \frac { {\bf F}_3 ( \grave { \bf U}^i) } { \alpha_{3}^{\tt LF} }
\bigg)
\bigg].
\end{split}
\end{equation*}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{theo:MHD:LLFsplit2D} and omitted here.
\end{proof}
\begin{remark}
In the above
generalized LF splitting properties,
the convex combination $\overline{\bf U}$ depends on
a number of strongly coupled states, making it very difficult to verify the admissibility of $\overline{\bf U}$ directly.
This difficulty is subtly overcome by using the inequality \eqref{eq:MHD:LLFsplit} under the ``discrete divergence-free'' condition, which is
an approximation to \eqref{eq:2D:BxBy0}. For example,
the 2D ``discrete divergence-free'' condition \eqref{eq:descrite2DDIV} can be derived
by applying a quadrature rule to the integrals on the left-hand side of
\begin{equation} \label{eq:div000}
\begin{split}
& \frac{1}{{\Delta x}} \left( \frac{1}{{\Delta y}}\int_{{\tt y}_0 }^{{\tt y}_0 + \Delta y}
{ \big( B_1 ({\tt x}_0 + \Delta x,{\tt y}) - B_1 ({\tt x}_0 ,{\tt y}) \big) d{\tt y}} \right)
\\
&
+ \frac{1}{{\Delta y}} \left( \frac{1}{{\Delta x}}\int_{{\tt x}_0 }^{{\tt x}_0 + \Delta x}
{ \big( B_2 ({\tt x},{\tt y}_0 + \Delta y)-B_2 ({\tt x},{\tt y}_0 )\big) d{\tt x}} \right) \\
& = \frac{1}{{\Delta x\Delta y}}\int_{I} {\left( {\frac{{\partial B_1 }}{{\partial {\tt x}}} + \frac{{\partial B_2 }}{{\partial {\tt y}}}} \right)d{\tt x}d{\tt y}}=0,
\end{split}
\end{equation}
where $({\tt x},{\tt y})=(x_1,x_2)$ and $I = [{\tt x}_0 ,{\tt x}_0 + \Delta x] \times [{\tt y}_0 ,{\tt y}_0 + \Delta y] $.
It is worth emphasizing that,
like the last term on the left-hand side of
\eqref{eq:MHD:LLFsplit},
the proposed discrete divergence-free (DDF) condition
is necessary and crucial for the generalized
LF splitting properties.
Without this condition, those properties do not hold in general, even if $\alpha_i$ is replaced with $\chi \alpha_i$ or
$\chi a_i$ for any constant $\chi \ge 1$, see the proof of Theorem \ref{theo:counterEx}.
\end{remark}
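To make the quadrature derivation of \eqref{eq:descrite2DDIV} from \eqref{eq:div000} concrete, the sketch below applies a two-point Gauss rule (exact for the linear integrands here) to the edge integrals for a hypothetical divergence-free linear field; the field and the cell are made up for illustration, and the accumulated quantity is exactly the left-hand side of \eqref{eq:descrite2DDIV}.

```python
import numpy as np

# A made-up, exactly divergence-free linear field: dB1/dx + dB2/dy = 2 - 2 = 0
def B1(x, y): return 1 + 2 * x + 3 * y
def B2(x, y): return 4 + 5 * x - 2 * y

x0, y0, dx, dy = 0.3, -0.1, 0.5, 0.7   # an arbitrary cell

# two-point Gauss nodes/weights on [0, 1] (exact for linear integrands)
gauss = np.array([0.5 - 0.5 / np.sqrt(3), 0.5 + 0.5 / np.sqrt(3)])
w = np.array([0.5, 0.5])

lhs = 0.0
for wi, s in zip(w, gauss):
    yq = y0 + s * dy                    # node on the vertical edges
    xq = x0 + s * dx                    # node on the horizontal edges
    lhs += wi * (B1(x0 + dx, yq) - B1(x0, yq)) / dx
    lhs += wi * (B2(xq, y0 + dy) - B2(xq, y0)) / dy

print(lhs)   # zero up to round-off: the discrete divergence-free condition
```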
The above generalized LF splitting properties
are important tools in
analyzing PP schemes on uniform Cartesian meshes
if the numerical flux is taken as the LF flux
\begin{equation}\label{eq:LFflux}
\hat {\bf F}_\ell ( {\bf U}^- , {\bf U}^+ ) = \frac{1}{2} \Big( {\bf F}_\ell ( {\bf U}^- ) + {\bf F}_\ell ( {\bf U}^+ ) -
\alpha_{\ell ,n}^{\tt LF} ( {\bf U}^+ - {\bf U}^- ) \Big),\quad \ell=1,\cdots,d.
\end{equation}
Here $\{\alpha_{\ell,n}^{\tt LF}\}$ denote the numerical viscosity parameters specified at the $n$-th discretized time level.
The extension of the above results to non-uniform or unstructured meshes will be presented in
a separate paper.
\section{One-dimensional positivity-preserving schemes}\label{sec:1Dpcp}
This section applies the above theory to
study provably PP schemes with the LF flux \eqref{eq:LFflux} for
the
system \eqref{eq:MHD} in one dimension.
In 1D, the divergence-free
condition \eqref{eq:2D:BxBy0} and the fifth equation in \eqref{eq:MHD} yield that $B_1(x_1,t)\equiv {\rm constant}$ (denoted by ${\tt B}_{\tt const}$) for all $x_1$ and $t \ge 0$.
To avoid confusing subscripts, we will use the symbol $\tt x$ to represent the variable $x_1$ in \eqref{eq:MHD}.
Assume that the spatial domain is divided into uniform cells $\{ I_j=({\tt x}_{j-\frac{1}{2}},{\tt x}_{j+\frac{1}{2}}) \}$
with a constant spatial step-size $\Delta x$,
and that the time interval is divided into the mesh $\{t_0=0, t_{n+1}=t_n+\Delta t_{n}, n\geq 0\}$
with the time step-size $\Delta t_{n}$ determined by some CFL condition.
Let
$\bar {\bf U}_j^n $ denote the numerical cell-averaged approximation of the exact solution ${\bf U}({\tt x},t)$ over $I_j$ at $t=t_n$.
Assume the discrete initial data $\bar {\bf U}_j^0\in {\mathcal G}$.
A scheme is said to be PP if its numerical solution $\bar {\bf U}_j^n$ always stays in ${\mathcal G}$.
\subsection{First-order scheme}
The 1D first-order LF scheme reads
\begin{equation}\label{eq:1DMHD:LFscheme}
\bar {\bf U}_j^{n+1} = \bar {\bf U}_j^n - \frac{\Delta t_n}{\Delta { x}} \Big( \hat {\bf F}_1 ( \bar {\bf U}_j^n ,\bar {\bf U}_{j+1}^n) - \hat {\bf F}_1 ( \bar {\bf U}_{j-1}^n, \bar {\bf U}_j^n ) \Big) ,
\end{equation}
where the numerical flux $\hat {\bf F}_1 (\cdot,\cdot) $ is defined by \eqref{eq:LFflux}.
A surprising discovery is that
the LF scheme \eqref{eq:1DMHD:LFscheme} with
the standard parameter $\alpha_{1,n}^{\tt LF}= \max_j {\mathscr{R}}_1 ( \bar {\bf U}_{j}^n ) $, although it works well in most cases,
is not always PP, no matter how small the CFL number is. However, if the parameter $\alpha_{1,n}^{\tt LF}$ in \eqref{eq:LFflux} satisfies
\begin{equation}\label{eq:Lxa1}
\alpha_{1,n}^{\tt LF} > \max_{j} \alpha_1 ( \bar {\bf U}_{j+1}^n, \bar {\bf U}_{j-1}^n ),
\end{equation}
then we can rigorously prove that
the scheme \eqref{eq:1DMHD:LFscheme} is PP when the CFL number is less than one.
These results are shown in the following two theorems.
We remark that the lower bound given in \eqref{eq:Lxa1} is acceptable in comparison with the standard parameter $\max_{j} {\mathscr{R}}_1 (\bar {\bf U}_{j}^n)$, because
one can derive from Proposition \ref{prop:alpha} that
$$
\max_{j} \alpha_1 ( \bar {\bf U}_{j+1}^n, \bar {\bf U}_{j-1}^n )
< 2 \max_{j} {\mathscr{R}}_1 (\bar {\bf U}_{j}^n),
$$
and, for smooth problems, $
\max_{j} \alpha_1 ( \bar {\bf U}_{j+1}^n, \bar {\bf U}_{j-1}^n ) < \max_{j} {\mathscr{R}}_1 (\bar {\bf U}_{j}^n)
+ {\mathcal O}( \Delta x ).
$
\begin{theorem}\label{eq:1DnotPP}
Assume that $\bar {\bf U}_j^0 \in{\mathcal G}$ and $\bar B_{1,j}^0 = {\tt B}_{\tt const}$ for all $j$.
Let the parameter $\alpha_{1,n}^{\tt LF}=\max_j {\mathscr{R}}_1 ( \bar {\bf U}_{j}^n ) $, and
$$\Delta t_n ={\tt C} \frac{ \Delta x}{ \alpha_{1,n}^{\tt LF} }, $$
where ${\tt C} $ is the CFL number. For any constant ${\tt C} >0$,
the scheme \eqref{eq:1DMHD:LFscheme} is not PP.
\end{theorem}
\begin{proof}
We prove this by contradiction.
Assume that there exists a CFL number ${\tt C}>0$ such that the scheme \eqref{eq:1DMHD:LFscheme} is PP.
Consider an ideal gas with $\gamma=1.4$ and the following (admissible) data
\begin{equation}\label{eq:exampleUUU}
\bar{\bf U}_k^n =
\begin{cases}
\big( \frac{8}{25},~0,~-\frac{8}{25},~0,~{\tt B}_{\tt const},~10,~0,~\frac{2504}{25} + \frac{5 {\tt p}}{2} \big)^\top, & k\le j-1,
\\[2mm]
\big( \frac12,~\frac32,~-2,~0,~{\tt B}_{\tt const},~8,~0,~\frac{353}{4}
+ \frac{5 {\tt p}}{2} \big)^\top, & k=j,
\\[2mm]
\big( \frac15,~0,~\frac15,~0,~{\tt B}_{\tt const},~5,~0,~\frac{313}{5} + \frac{5 {\tt p}}{2} \big)^\top, & k \ge j+1,
\end{cases}
\end{equation}
where
${\tt B}_{\tt const}=10$ and ${\tt p}>0$.
For any ${\tt p} \in \big(0,\frac1{800}\big)$, we have
$$
\alpha_{1,n}^{\tt LF} = \max_k {\mathscr{R}}_1 ( \bar {\bf U}_{k}^n )
= \frac{\sqrt{5}}{4} \Big( 7{\tt p} + 10^3 + \sqrt{ 49 {\tt p}^2 + 10^6 } \Big)^{\frac12},
$$
and the state $\bar {\bf U}^{n+1}_j$ computed by \eqref{eq:1DMHD:LFscheme} depends on $\tt p$, specifically,
\begin{equation*}
\begin{aligned}
\bar {\bf U}^{n+1}_j & = \Bigg(
\frac12 - \frac{6 {\tt C}}{25},~
\frac{ 3 ( 1 - {\tt C} ) }{2} + \frac{ 75 {\tt C} }{ 4 \alpha_{1,n}^{\tt LF} },~
{\tt C} \Big( \frac{97}{50} - \frac{25} { \alpha_{1,n}^{\tt LF} } \Big)-2,~0,~10,\\
& \qquad
8+{\tt C} \Big( \frac{ 10 }{ \alpha_{1,n}^{\tt LF} } - \frac12 \Big),~0,~
\frac{ 5 {\tt p} }{2} + \frac{ 353 }{4}
- \frac{ 687 {\tt C} }{100} +
\frac{75 {\tt C}}{ \alpha_{1,n}^{\tt LF} }
\Bigg)^\top =: {\bf U} ({\tt p}).
\end{aligned}
\end{equation*}
By assumption, we have
${\bf U} ({\tt p}) \in {\mathcal G}$.
This yields $0<{\tt C}<\frac{25}{12}$,
and ${\mathcal E} ( {\bf U} ({\tt p}) )>0$ for any ${\tt p} \in (0,\frac{1}{800})$.
The continuity of ${\mathcal E} ( {\bf U})$ with respect to $\bf U$ on $\mathbb{R}^+\times \mathbb{R}^7$
implies that
$$
0 \le
\mathop {\lim }\limits_{{\tt p} \to 0^+ } {\mathcal E} ( {\bf U} ({\tt p}) ) = {\mathcal E} \Big( \mathop {\lim }\limits_{{\tt p} \to 0^+ } {\bf U} ({\tt p}) \Big)
= \frac{3 {\tt C} }{400} \times \frac{ 8 { \tt C }^2 + 75 { \tt C} - 200 }{ 25 - 12 {\tt C} } < 0,
$$
which is a contradiction. Thus the assumption is incorrect, and the proof is completed.
\end{proof}
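The contradiction can also be reproduced numerically. The following sketch, illustrative only, applies a single step of \eqref{eq:1DMHD:LFscheme} with ${\tt C}=1$ and ${\tt p}=10^{-8}$ to the data \eqref{eq:exampleUUU}; it assumes the conserved ordering $(\rho,{\bf m},{\bf B},E)^\top$, the internal energy ${\mathcal E}({\bf U})=E-\frac{|{\bf m}|^2}{2\rho}-\frac{|{\bf B}|^2}{2}$, and ${\mathscr R}_1=|v_1|+{\mathcal C}_1$ with ${\mathcal C}_1$ the fast magnetosonic speed. The resulting internal energy is negative, close to the limit value $-0.0675$ given by the displayed formula at ${\tt C}=1$.

```python
import numpy as np

GAMMA = 1.4

def internal_energy(U):
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    return E - m @ m / (2 * rho) - B @ B / 2

def pressure(U):
    return (GAMMA - 1) * internal_energy(U)

def flux1(U):
    rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
    v = m / rho
    ptot = pressure(U) + B @ B / 2
    F = np.empty(8)
    F[0] = m[0]
    F[1:4] = m[0] * v - B[0] * B
    F[1] += ptot
    F[4:7] = v[0] * B - B[0] * v
    F[7] = v[0] * (E + ptot) - B[0] * (v @ B)
    return F

def R1(U):
    rho, B = U[0], U[4:7]
    cs2 = GAMMA * pressure(U) / rho
    b2 = B @ B / rho
    cf2 = 0.5 * (cs2 + b2 + np.sqrt((cs2 + b2) ** 2 - 4 * cs2 * B[0] ** 2 / rho))
    return abs(U[1] / rho) + np.sqrt(cf2)

p, C = 1e-8, 1.0   # pressure parameter and CFL number
UL = np.array([8/25, 0.0, -8/25, 0.0, 10.0, 10.0, 0.0, 2504/25 + 2.5 * p])
UM = np.array([0.5,  1.5, -2.0,  0.0, 10.0,  8.0, 0.0,  353/4 + 2.5 * p])
UR = np.array([1/5,  0.0,  1/5,  0.0, 10.0,  5.0, 0.0,  313/5 + 2.5 * p])

alpha = max(R1(U) for U in (UL, UM, UR))   # the standard LF parameter
dt_dx = C / alpha                           # Delta t_n / Delta x

def lf_flux(Um, Up):                        # the LF flux (eq:LFflux)
    return 0.5 * (flux1(Um) + flux1(Up) - alpha * (Up - Um))

Unew = UM - dt_dx * (lf_flux(UM, UR) - lf_flux(UL, UM))
print(internal_energy(Unew))   # about -0.0675 < 0: positivity is lost
```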
\begin{theorem}\label{theo:1DMHD:LFscheme}
Assume that $\bar {\bf U}_j^0 \in{\mathcal G}$ and $\bar B_{1,j}^0 = {\tt B}_{\tt const}$ for all $j$, and
the parameter $\alpha_{1,n}^{\tt LF}$ satisfies \eqref{eq:Lxa1}.
Then the state $\bar {\bf U}_j^n$, computed by the scheme \eqref{eq:1DMHD:LFscheme} under the CFL condition
\begin{equation}\label{eq:CFL:LF}
0< \alpha_{1,n}^{\tt LF} \Delta t_n / \Delta x \le 1,
\end{equation}
belongs to ${\mathcal G}$ and satisfies $\bar B_{1,j}^n = {\tt B}_{\tt const} $ for all $j$
and $n\in {\mathbb{N}}$.
\end{theorem}
\begin{proof}
We use induction on the time level $n$.
The conclusion holds for $n=0$ by the hypothesis on the initial data.
We now assume that $\bar {\bf U}_j^n\in {\mathcal G}$ with $\bar B_{1,j}^n = {\tt B}_{\tt const} $ for all $j$,
and show that the conclusion holds for $n+1$.
For the numerical flux in \eqref{eq:LFflux}, the fifth equation in \eqref{eq:1DMHD:LFscheme} gives
\begin{equation*}
\bar B_{1,j}^{n+1} = \bar B_{1,j}^{n} - \frac{\lambda}{2} \big( 2 \bar B_{1,j}^{n} - \bar B_{1,j+1}^{n} - \bar B_{1,j-1}^{n} \big)
= {\tt B}_{\tt const},
\end{equation*}
for all $j$, where $\lambda = \alpha_{1,n}^{\tt LF} \Delta t_n / \Delta x \in (0,1]$ due to \eqref{eq:CFL:LF}.
We rewrite the scheme \eqref{eq:1DMHD:LFscheme} as
\begin{equation*}
\bar {\bf U}_j^{n+1} = (1-\lambda) \bar {\bf U}_j^n + \lambda {\bf \Xi},
\end{equation*}
with
$$
{\bf \Xi}:=\frac{1}{2 }\bigg( \bar {\bf U}_{j+1}^n - \frac{ {\bf F}_1( \bar {\bf U}_{j+1}^n) }{ \alpha_{1,n}^{\tt LF} } +
\bar {\bf U}_{j-1}^n + \frac{ {\bf F}_1 ( \bar {\bf U}_{j-1}^n )} {\alpha_{1,n}^{\tt LF}}\bigg).
$$
Under the induction hypothesis $\bar {\bf U}_{j-1}^n,\bar {\bf U}_{j+1}^n \in {\mathcal G}$ and $\bar B_{1,j-1}^{n}=\bar B_{1,j+1}^{n}$,
we conclude that ${\bf \Xi} \in {\mathcal G}$ by the generalized LF splitting property in Theorem \ref{theo:MHD:LLFsplit1D}.
The convexity of $\mathcal G$ further yields $\bar {\bf U}_j^{n+1} \in {\mathcal G}$.
The proof is completed.
\end{proof}
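The rewriting of \eqref{eq:1DMHD:LFscheme} used in this proof is a flux-independent algebraic identity. A minimal sketch checks it with Burgers' flux $f(u)=u^2/2$ standing in for ${\bf F}_1$ (an illustrative assumption; any flux, any $\alpha>0$, and any $\lambda\in(0,1]$ give the same identity):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):                  # Burgers flux as a scalar stand-in for F_1
    return 0.5 * u * u

def f_hat(um, up, alpha):  # LF numerical flux (eq:LFflux)
    return 0.5 * (f(um) + f(up) - alpha * (up - um))

uL, uj, uR = rng.uniform(-1, 1, 3)
alpha = max(abs(uL), abs(uj), abs(uR)) + 1.0
lam = 0.8                  # lam = alpha * dt / dx, within (0, 1]

# direct LF update (eq:1DMHD:LFscheme), with dt/dx = lam/alpha
u_new = uj - (lam / alpha) * (f_hat(uj, uR, alpha) - f_hat(uL, uj, alpha))

# convex-combination form used in the proof
xi = 0.5 * (uR - f(uR) / alpha + uL + f(uL) / alpha)
u_split = (1 - lam) * uj + lam * xi

print(abs(u_new - u_split))   # zero up to round-off
```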
\begin{remark}
If the condition \eqref{eq:CFL:LF} is enhanced to $0< \alpha_{1,n}^{\tt LF} \Delta t_n / \Delta x < 1$, then Theorem \ref{theo:1DMHD:LFscheme} holds for all $\alpha_{1,n}^{\tt LF} \ge \max_{j} \alpha_1 ( \bar {\bf U}_{j+1}^n, \bar {\bf U}_{j-1}^n )$, by Lemma \ref{theo:MHD:convex}.
It is similar for the following Theorems \ref{thm:PP:1DMHD}, \ref{theo:2DMHD:LFscheme},
\ref{theo:FullPP:LFscheme}, \ref{thm:PP:2DMHD}, and will not be repeated.
\end{remark}
\subsection{High-order schemes}\label{sec:High1D}
We now study provably PP high-order schemes for the 1D MHD equations \eqref{eq:MHD}. With the provably PP LF scheme \eqref{eq:1DMHD:LFscheme} as a
building block, any high-order finite difference scheme can be
modified to be PP by a limiter \cite{Christlieb}.
The following PP analysis is focused on finite volume and DG schemes. The considered 1D DG schemes are similar to those
in \cite{cheng} but with a different viscosity parameter
in the LF flux so that the PP property can be rigorously proved in our case.
For the moment, we use the forward Euler method for time discretization, while high-order time discretization will be discussed later.
We consider the high-order finite volume schemes as well as the scheme satisfied by
the cell averages of a discontinuous Galerkin (DG) method,
which have the following form
\begin{equation}\label{eq:1DMHD:cellaverage}
\bar {\bf U}_j^{n+1} = \bar {\bf U}_j^{n} - \frac{\Delta t_n}{\Delta x}
\Big( \hat {\bf F}_1 ( {\bf U}_{j+ \frac{1}{2}}^-, {\bf U}_{j+ \frac{1}{2}}^+ )
- \hat {\bf F}_1 ( {\bf U}_{j- \frac{1}{2}}^-, {\bf U}_{j-\frac{1}{2}}^+)
\Big) ,
\end{equation}
where $\hat {\bf F}_1 ( \cdot,\cdot )$ is taken as the LF flux defined in \eqref{eq:LFflux}.
The quantities ${\bf U}_{j + \frac{1}{2}}^-$ and ${\bf U}_{j + \frac{1}{2}}^+$ are the high-order approximations
of the point values ${\bf U}\big( {\tt x}_{j + \frac{1}{2}} ,t_n \big)$ within the cells $I_j$ and $I_{j+1}$, respectively,
computed by
\begin{equation}\label{eq:DG1Dvalues}
{\bf U}_{j + \frac{1}{2}}^- = {\bf U}_j^n \big( {\tt x}_{j + \frac{1}{2}}-0 \big), \quad {\bf U}_{j + \frac{1}{2}}^+ = {\bf U}_{j+1}^n \big( {\tt x}_{j + \frac{1}{2}}+0 \big),
\end{equation}
where the polynomial function ${\bf U}_j^n({\tt x})$, whose cell average over $I_j$ equals $\bar {\bf U}_j^n$,
approximates ${\bf U}( {\tt x},t_n)$ within the cell $I_j$; it is either reconstructed in the finite volume methods from $\{\bar {\bf U}_j^n\}$ or directly evolved in the DG methods
with degree ${\tt K} \ge 1$.
The evolution equations for the high-order ``moments'' of ${\bf U}_j^n({\tt x})$ in the DG methods are omitted because we are only concerned with the PP property of the schemes here.
Generally the high-order scheme \eqref{eq:1DMHD:cellaverage} is not PP.
As proved in the following theorem,
the scheme \eqref{eq:1DMHD:cellaverage} becomes PP if
${\bf U}_{j + \frac{1}{2}}^\pm$ are computed by \eqref{eq:DG1Dvalues} with ${\bf U}^n_j({\tt x})$ satisfying
\begin{align}\label{eq:1DDG:con1}
&
B_{1,j+\frac12}^{\pm} = {\tt B}_{\tt const} ,\quad \forall j,
\\ \label{eq:1DDG:con2}
& {\bf U}_j^n ( \hat {\tt x}_j^{(\mu)} ) \in {\mathcal G}, \quad \forall \mu \in \{1,2,\cdots,{\tt L}\}, ~\forall j,
\end{align}
and $\alpha_{1,n}^{\tt LF}$ satisfies \eqref{eq:LxaH}.
Here $\{ \hat {\tt x}_j^{(\mu)} \}_{\mu=1}^{ {\tt L}}$ are the {\tt L}-point Gauss-Lobatto quadrature nodes in the interval $I_j$,
whose associated quadrature weights are denoted by $\{\hat \omega_\mu\}_{\mu=1} ^{\tt L}$ with $\sum_{\mu=1}^{\tt L} \hat\omega_\mu = 1$.
We require $2{\tt L}-3\ge {\tt K}$ so that the algebraic precision of the corresponding quadrature is at least $\tt K$, e.g., taking $\tt L$ as the integer part of $\frac{{\tt K}+3}{2}$.
\begin{theorem} \label{thm:PP:1DMHD}
If the polynomial vectors $\{{\bf U}^n_j({\tt x})\}$ satisfy \eqref{eq:1DDG:con1}--\eqref{eq:1DDG:con2}, and the parameter $\alpha_{1,n}^{\tt LF}$
in \eqref{eq:LFflux} satisfies
\begin{equation}\label{eq:LxaH}
\alpha_{1,n}^{\tt LF} > \max_{j} \alpha_1 ( {\bf U}_{j+\frac12}^\pm, {\bf U}_{j-\frac12}^\pm ),
\end{equation}
then the high-order scheme \eqref{eq:1DMHD:cellaverage} is PP under the CFL condition
\begin{equation}\label{eq:CFL:1DMHD}
0< \alpha_{1,n}^{\tt LF} \Delta t_n/ \Delta x \le \hat \omega_1.
\end{equation}
\end{theorem}
\begin{proof}
The exactness of the $\tt L$-point Gauss-Lobatto quadrature rule for the polynomials of degree $\tt K$ yields
$$
\bar{\bf U}_j^n = \frac{1}{\Delta x} \int_{I_j} {\bf U}_j^n ({\tt x}) d{\tt x} = \sum \limits_{\mu=1}^{\tt L} \hat \omega_\mu {\bf U}_j^n (\hat {\tt x}_j^{(\mu)} ).
$$
Noting that $\hat \omega_1 = \hat \omega_{\tt L}$, $\hat{\tt x}_j^{(1)}={\tt x}_{j-\frac12}$ and $\hat{\tt x}_j^{({\tt L})}={\tt x}_{j+\frac12}$, we can then rewrite the scheme \eqref{eq:1DMHD:cellaverage} in the convex combination form
\begin{align} \label{eq:1DMHD:convexsplit}
\bar{\bf U}_j^{n+1} =
\sum \limits_{\mu=2}^{{\tt L}-1} \hat \omega_\mu {\bf U}_j^n ( \hat {\tt x}_j^{(\mu)} )
+ (\hat \omega_1-\lambda) \left( {\bf U}_{j-\frac{1}{2}}^+ + {\bf U}_{j+\frac{1}{2}}^- \right)
+ \lambda {\bf \Xi}_- + \lambda {\bf \Xi}_+,
\end{align}
where $\lambda = \alpha_{1,n}^{\tt LF} \Delta t_n/\Delta x \in (0,\hat \omega_1 ]$, and
\begin{align*}
& {\bf \Xi}_\pm = \frac{1}{2} \left( {\bf U}_{j+\frac{1}{2}}^\pm - \frac{ {\bf F}_1 ( {\bf U}_{j+\frac{1}{2}}^\pm ) } { \alpha_{1,n}^{\tt LF} }
+ {\bf U}_{j-\frac{1}{2}}^\pm + \frac{ {\bf F}_1 ( {\bf U}_{j-\frac{1}{2}}^\pm ) } { \alpha_{1,n}^{\tt LF} } \right).
\end{align*}
The conditions \eqref{eq:1DDG:con1}--\eqref{eq:1DDG:con2} and \eqref{eq:LxaH} yield ${\bf \Xi}_\pm \in {\mathcal G}$
by the generalized LF splitting property in Theorem \ref{theo:MHD:LLFsplit1D}.
We therefore have $\bar{\bf U}_j^{n+1} \in {\mathcal G} $ from \eqref{eq:1DMHD:convexsplit}
by the convexity of $\mathcal G$.
\end{proof}
\begin{remark}
The condition \eqref{eq:1DDG:con1} is easily ensured in practice, since the exact solution satisfies $B_1(x_1,t)\equiv {\tt B}_{\tt const}$ and
the flux for $B_1$ is zero. The condition \eqref{eq:1DDG:con2}, in turn, can be enforced by a simple scaling limiting procedure,
designed in \cite{cheng} by extending the
techniques in \cite{zhang2010,zhang2010b};
the details of the procedure are omitted here.
\end{remark}
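For illustration, the core of such a scaling limiting procedure, shown here for a single scalar component such as the density and with hypothetical helper names (this is a minimal sketch of the idea, not the full limiter of \cite{cheng}), pulls point values toward the admissible cell average:

```python
def scale_toward_average(point_vals, cell_avg, floor=1e-13):
    """Zhang-Shu-type scaling: shrink point values toward the cell average
    just enough that the minimum stays at or above `floor`.  Because each
    output is cell_avg + theta*(v - cell_avg), the (weighted) cell average
    is preserved exactly; cell_avg > floor is assumed admissible."""
    m = min(point_vals)
    if m >= floor:
        return list(point_vals)
    theta = (cell_avg - floor) / (cell_avg - m)
    return [cell_avg + theta * (v - cell_avg) for v in point_vals]
```

With equal quadrature weights and `cell_avg` equal to the mean of the point values, the limited values keep the same mean while becoming nonnegative, which is precisely why the limiter does not destroy conservation or the admissibility of the cell average.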
The above analysis focuses on first-order (forward Euler) time discretization.
It remains valid for high-order explicit time discretizations
using strong stability preserving (SSP) methods \cite{Gottlieb2001,Gottlieb2005,Gottlieb2009},
because $\mathcal G$ is convex and an SSP method is a convex combination of forward Euler steps.
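For concreteness, the Shu-Osher form of the classical third-order SSP Runge-Kutta method makes this convex-combination structure explicit; a minimal Python sketch (the right-hand side `rhs` is a generic placeholder for the spatial operator):

```python
def forward_euler(u, rhs, dt):
    return u + dt * rhs(u)

def ssp_rk3(u, rhs, dt):
    """Shu-Osher SSP-RK3.  Each stage is a convex combination of the
    previous solution and a forward Euler step (weights 3/4+1/4 = 1 and
    1/3+2/3 = 1), so any convex invariant set preserved by forward Euler
    under a CFL condition is preserved by the full method as well."""
    u1 = forward_euler(u, rhs, dt)
    u2 = 0.75 * u + 0.25 * forward_euler(u1, rhs, dt)
    return u / 3 + (2 / 3) * forward_euler(u2, rhs, dt)
```

For the linear problem $u' = zu$ this reproduces the third-order Taylor amplification factor $1+z+z^2/2+z^3/6$, which is one quick way to verify the stage coefficients.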
\section{Two-dimensional positivity-preserving schemes}\label{sec:2Dpcp}
This section discusses positivity-preserving (PP) schemes for the MHD system \eqref{eq:MHD} in two dimensions ($d=2$).
The extension of our analysis to the 3D case ($d=3$) is straightforward and is given in Appendix \ref{sec:3D}.
Our analysis will reveal
that the PP property of conservative multi-dimensional MHD schemes is strongly connected with
a discrete divergence-free condition on the numerical magnetic field.
For convenience,
the symbols $({\tt x},{\tt y})$ are used to denote the variables $(x_1,x_2)$ in \eqref{eq:MHD}.
Assume that the 2D spatial domain is divided into a uniform rectangular mesh with cells $\big\{I_{ij}=({\tt x}_{i-\frac{1}{2}},{\tt x}_{i+\frac{1}{2}})\times
({\tt y}_{j-\frac{1}{2}},{\tt y}_{j+\frac{1}{2}}) \big\}$.
The spatial step-sizes in ${\tt x},{\tt y}$ directions are denoted by
$\Delta x,\Delta y$ respectively. The time interval is also divided into the mesh $\{t_0=0, t_{n+1}=t_n+\Delta t_{n}, n\geq 0\}$
with the time step size $\Delta t_{n}$ determined by the CFL condition. We use $\bar {\bf U}_{ij}^n $
to denote the numerical approximation to the cell-averaged value of the exact solution over $I_{ij}$
at time $t_n$.
We seek numerical schemes whose solutions
$\bar {\bf U}_{ij}^n$
remain in $\mathcal G$.
\subsection{First-order scheme} \label{sec:2D:FirstOrder}
The 2D first-order LF scheme reads
\begin{equation} \label{eq:2DMHD:LFscheme}
\begin{split}
\bar {\bf U}_{ij}^{n+1} = \bar {\bf U}_{ij}^n - \frac{\Delta t_n}{\Delta x} \Big( \hat {\bf F}_{1,i+\frac12,j} - \hat {\bf F}_{1,i-\frac12,j}\Big)
- \frac{\Delta t_n}{\Delta y} \Big( \hat {\bf F}_{2,i,j+\frac12} - \hat {\bf F}_{2,i,j-\frac12} \Big),
\end{split}
\end{equation}
where
$\hat {\bf F}_{1,i+\frac12,j}= \hat {\bf F}_1 ( \bar {\bf U}_{ij}^n ,\bar {\bf U}_{i+1,j}^n)$,
$\hat {\bf F}_{2,i,j+\frac12} = \hat {\bf F}_2 ( \bar {\bf U}_{ij}^n ,\bar {\bf U}_{i,j+1}^n) $, and $\hat {\bf F}_\ell (\cdot,\cdot), \ell=1,2,$ are the LF fluxes in \eqref{eq:LFflux}.
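Assuming the standard form $\hat{\bf F}_\ell({\bf u},{\bf v}) = \frac12\big({\bf F}_\ell({\bf u})+{\bf F}_\ell({\bf v})-\alpha_{\ell,n}^{\tt LF}({\bf v}-{\bf u})\big)$ for the LF flux \eqref{eq:LFflux}, the structure of such an update is easiest to see in one dimension for a scalar conservation law; a Python sketch on a periodic grid (the flux `f` and viscosity `alpha` are placeholders):

```python
import numpy as np

def lf_flux(u_left, u_right, f, alpha):
    # Standard (global) Lax-Friedrichs numerical flux.
    return 0.5 * (f(u_left) + f(u_right) - alpha * (u_right - u_left))

def lf_step_1d(u, f, alpha, dt_over_dx):
    """One forward-Euler step of the first-order LF scheme for a scalar
    conservation law on a periodic 1D grid of cell averages."""
    ul = np.roll(u, 1)    # u_{j-1}
    ur = np.roll(u, -1)   # u_{j+1}
    flux_right = lf_flux(u, ur, f, alpha)
    flux_left = lf_flux(ul, u, f, alpha)
    return u - dt_over_dx * (flux_right - flux_left)
```

On a periodic grid the update is conservative (the sum of cell averages is unchanged), and for linear advection with $\alpha\,\Delta t/\Delta x \le 1$ it is a convex combination of neighboring values, which is the scalar analogue of the admissibility arguments used below.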
As mentioned in \cite{Christlieb}, there has been no rigorous proof that the LF
scheme \eqref{eq:2DMHD:LFscheme}, or any other first-order scheme, is PP in the multi-dimensional case.
For the ideal MHD with the EOS \eqref{eq:EOS},
it seems natural to conjecture \cite{cheng} that
\begin{equation}\label{eq:conjectureCheng}
\mbox{
given $\bar {\bf U}_{ij}^n \in {\mathcal G}~~\forall i,j$, then $\bar {\bf U}_{ij}^{n+1}$ computed from \eqref{eq:2DMHD:LFscheme} always
belongs to $\mathcal G$,
}
\end{equation}
under certain CFL condition (e.g., the CFL number is less than 0.5).
If \eqref{eq:conjectureCheng} held true, it would be important and very useful for developing PP high-order schemes \cite{cheng,Christlieb2016,Christlieb} for \eqref{eq:MHD}.
Unfortunately, the following theorem shows that
\eqref{eq:conjectureCheng} does not always
hold,
no matter
how small the specified CFL number is,
and even if
the parameter $\alpha_{\ell,n}^{\tt LF} $ is taken as $\chi \max_{ij}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ij})}$
with any given constant $\chi \ge 1$.
(Note that
increasing the numerical viscosity usually enhances
the robustness of an LF scheme and makes the PP property more likely to hold, and
$\alpha_{\ell,n}^{\tt LF} = \chi \max_{ij}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ij})}$ corresponds
to a numerical viscosity $\chi$ times larger
than the standard one.)
\begin{theorem}\label{theo:counterEx}
Let $\alpha_{\ell,n}^{\tt LF} = \chi \max_{ij}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ij})}$ with the
constant $\chi \ge 1$, and
$$\Delta t_n = \frac{ {\tt C} }{ \alpha_{1,n}^{\tt LF} /\Delta x + \alpha_{2,n}^{\tt LF} / \Delta y }, $$
where ${\tt C}>0$ is the CFL number.
For any given constants $\chi$ and $\tt C$,
there always exists a set of admissible states
$\{ \bar {\bf U}_{ij}^{n},\forall i,j\}$
such that the solution $\bar {\bf U}_{ij}^{n+1}$
of \eqref{eq:2DMHD:LFscheme} does not belong to $\mathcal G$.
In other words, for any given $\chi$ and $\tt C$,
the admissibility of $\{ \bar {\bf U}_{ij}^{n}, \forall i,j \}$ does not always guarantee that $\bar {\bf U}_{ij}^{n+1} \in {\mathcal G}$, $\forall i,j$.
\end{theorem}
\begin{proof}
We prove it by contradiction.
Assume that there exist
a constant $\chi\ge 1$ and a CFL number ${\tt C}>0$
such that $\bar {\bf U}_{ij}^n \in {\mathcal G},~\forall i,j$,
always ensures $\bar {\bf U}_{ij}^{n+1} \in {\mathcal G},~\forall i,j$.
Consider an ideal gas and the special set of admissible states
\begin{equation}\label{eq:counterEx2D}
{\small
\bar{\bf U}_{k,m}^n =
\begin{cases}
\left(1,~1,~0,~0,~1,~0,~0,~
\frac{\tt p}{\gamma-1}+1\right)^\top, \qquad (k,m)=(i-1,j),
\\[2mm]
\left(1,~1,~0,~0,~1+ \epsilon,~0,~0,~\frac{\tt p}{\gamma-1}+\frac{1+(1+\epsilon)^2}2 \right)^\top, \qquad (k,m)=(i+1,j),
\\[2mm]
\left(1, \frac{4 \chi + \epsilon }{4 \chi},~0,~0,~1+\frac{\epsilon}{2},~0,~0,~
\frac{\tt p}{\gamma-1} + \frac{ (4 \chi + \epsilon)^2 }{32 \chi^2}
+ \frac{(\epsilon+2)^2}{8}
\right)^\top,~{\rm otherwise},
\end{cases}}
\end{equation}
where ${\tt p} >0$ and $\epsilon >0$. For any ${\tt p} \in \big(0,\frac{1}{\gamma}\big)$ and $\epsilon \in \big(0,\frac{1}{\chi}\big)$, we have
$
\alpha_{1,n}^{\tt LF} = \chi (2+\epsilon),\quad
\alpha_{2,n}^{\tt LF} = \chi \sqrt{\gamma {\tt p} + (1+\epsilon)^2}.
$
Hence
$
\lambda_1 ({\tt p},\epsilon) := \alpha_{1,n}^{\tt LF} \frac{\Delta t_n}{\Delta x}
= \frac{ {\tt C } \Delta y ( 2 + \epsilon) }{ \Delta y (2 + \epsilon) + \Delta x \sqrt{\gamma {\tt p} + (1+\epsilon)^2} }.
$
Substituting \eqref{eq:counterEx2D} into \eqref{eq:2DMHD:LFscheme} gives
\begin{equation*}
\begin{aligned}
\bar {\bf U}_{ij}^{n+1} &= \bigg(
1, \frac{4 \chi + \epsilon }{4 \chi},~0,~0,~1+\frac{\epsilon}{2},~0,~0,~
\lambda_1 ({\tt p},\epsilon) \times \frac{3+(1+\epsilon)^2}{4}
\\
& \qquad
+ \Big(1- \lambda_1 ({\tt p},\epsilon) \Big)
\Big( \frac{ (4 \chi + \epsilon)^2 }{32 \chi^2}
+ \frac{(\epsilon+2)^2}{8} \Big) + \frac{\tt p}{\gamma-1}
\bigg)^\top =: {\bf U} ({\tt p},\epsilon) .
\end{aligned}
\end{equation*}
By assumption,
${\bf U} ({\tt p},\epsilon) \in {\mathcal G}$, and hence ${\mathcal E} ( {\bf U} ({\tt p},\epsilon) )>0$, for any $ {\tt p} \in \big(0,\frac{1}{\gamma}\big)$ and $\epsilon \in \big(0,\frac{1}{\chi}\big)$.
The continuity of ${\mathcal E} ( {\bf U})$ with respect to $\bf U$ on $\mathbb{R}^+\times \mathbb{R}^7$ further implies that
$$
0 \le
\mathop {\lim }\limits_{{\tt p} \to 0^+ } {\mathcal E} ( {\bf U} ({\tt p},\epsilon) ) = {\mathcal E} \Big( \mathop {\lim }\limits_{{\tt p} \to 0^+ } {\bf U} ({\tt p},\epsilon) \Big)
= - \frac{ \epsilon {\tt C} (2+\epsilon) ( 8 \chi + \epsilon - 4 \epsilon \chi^2 ) }{32 \chi^2 \big( 2+ \epsilon + (1+\epsilon) {\Delta x}/{\Delta y} \big)} < 0,
$$
which is a contradiction. Thus the assumption is incorrect, and the proof is completed.
\end{proof}
\begin{remark}\label{rem:1DdisBnotPCP}
The proof of Theorem \ref{theo:counterEx} also implies that, for any specified CFL number,
the 1D LF scheme \eqref{eq:1DMHD:LFscheme} is not always PP when $B_1$ is piecewise
constant.
\end{remark}
Inspired by Theorem \ref{theo:counterEx}, we conjecture that,
to fully ensure the admissibility of $\bar {\bf U}_{ij}^{n+1}$,
an additional condition on the states $\{\bar {\bf U}_{i,j}^n, \bar {\bf U}_{i\pm1,j}^n, \bar {\bf U}_{i,j\pm1}^n\}$, beyond their admissibility, is required.
Such an additional necessary condition
should be a divergence-free condition, in a discrete sense, on
$\{\bar{\bf B}_{ij}^n\}$,
whose importance for robust simulations has been widely recognized.
The following analysis confirms that a discrete divergence-free (DDF) condition does
play an important role in achieving the PP property.
If the states $\{\bar {\bf U}_{i,j}^n\}$ are all admissible and
satisfy the following DDF condition
\begin{equation}\label{eq:DisDivB}
\mbox{\rm div} _{ij} \bar {\bf B}^n := \frac{ \left( \bar B_1\right)_{i+1,j}^n - \left( \bar B_1 \right)_{i-1,j}^n } {2\Delta x} + \frac{ \left( \bar B_2 \right)_{i,j+1}^n - \left( \bar B_2 \right)_{i,j-1}^n } {2\Delta y} = 0,
\end{equation}
then we can rigorously prove that the scheme \eqref{eq:2DMHD:LFscheme} preserves $\bar {\bf U}_{ij}^{n+1}\in {\mathcal G}$, by using the generalized LF splitting property in Theorem \ref{theo:MHD:LLFsplit2D}.
\begin{theorem} \label{theo:2DMHD:LFscheme}
If, for all $i$ and $j$, $\bar {\bf U}_{ij}^n \in {\mathcal G}$ and the DDF condition \eqref{eq:DisDivB} holds,
then the solution $ \bar {\bf U}_{ij}^{n+1}$ of \eqref{eq:2DMHD:LFscheme} always belongs to ${\mathcal G}$ under the CFL condition
\begin{equation}\label{eq:CFL:LF2D}
0< \frac{ \alpha_{1,n}^{\tt LF} \Delta t_n}{\Delta x} + \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n}{\Delta y} \le 1,
\end{equation}
where the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\begin{equation}\label{eq:Lxa12}
\alpha_{1,n}^{\tt LF} > \max_{i,j} \alpha_1 ( \bar {\bf U}_{i+1,j}^n, \bar {\bf U}_{i-1,j}^n ),\quad
\alpha_{2,n}^{\tt LF} > \max_{i,j} \alpha_2 ( \bar {\bf U}_{i,j+1}^n, \bar {\bf U}_{i,j-1}^n ).
\end{equation}
\end{theorem}
\begin{proof}
Substituting \eqref{eq:LFflux} into \eqref{eq:2DMHD:LFscheme} gives
\begin{equation*}
\bar {\bf U}_{ij}^{n+1} = \lambda {\bf \Xi} + (1-\lambda) \bar {\bf U}_{ij}^n ,
\end{equation*}
where $\lambda := \Delta t_n \left( \frac{\alpha_{1,n}^{\tt LF}}{\Delta x} + \frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \right) \in (0,1]$ by \eqref{eq:CFL:LF2D}, and
\begin{equation*}
\begin{split}
{\bf \Xi} & :=
\frac{1}{ 2\left( \frac{\alpha_{1,n}^{\tt LF}}{\Delta x} + \frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \right) }
\Bigg[
\frac{\alpha_{1,n}^{\tt LF}}{\Delta x}
\left( \bar {\bf U}_{i+1,j}^n - \frac{ {\bf F}_1( \bar {\bf U}_{i+1,j}^n)}{ \alpha_{1,n}^{\tt LF} } +
\bar {\bf U}_{i-1,j}^n + \frac{ {\bf F}_1( \bar {\bf U}_{i-1,j}^n) }{ \alpha_{1,n}^{\tt LF} } \right) \\
& \quad + \frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \left( \bar {\bf U}_{i,j+1}^n - \frac{ {\bf F}_2( \bar {\bf U}_{i,j+1}^n)}{ \alpha_{2,n}^{\tt LF} } +
\bar {\bf U}_{i,j-1}^n + \frac{ {\bf F}_2( \bar {\bf U}_{i,j-1}^n) }{ \alpha_{2,n}^{\tt LF} } \right) \Bigg].
\end{split}
\end{equation*}
Using the condition \eqref{eq:DisDivB} and
Theorem \ref{theo:MHD:LLFsplit2D} gives ${\bf \Xi} \in {\mathcal G}$.
The convexity of $\mathcal G$ further yields $\bar {\bf U}_{ij}^{n+1} \in {\mathcal G}$. The proof is completed.
\end{proof}
\begin{remark}
The data in \eqref{eq:counterEx2D} satisfies
$
{\rm div}_{ij} \bar {\bf B}^{n} = \frac{\epsilon}{2\Delta x} > 0,
$
which can be very small when $0<\epsilon \ll 1$.
Therefore, from the proof of Theorem \ref{theo:counterEx},
we conclude that
even a slight violation of the condition \eqref{eq:DisDivB} can lead to
an inadmissible solution of the scheme \eqref{eq:2DMHD:LFscheme}
if the pressure is sufficiently low. This demonstrates the importance
of \eqref{eq:DisDivB}.
\end{remark}
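The value ${\rm div}_{ij} \bar {\bf B}^{n} = \epsilon/(2\Delta x)$ quoted above can be reproduced mechanically from \eqref{eq:counterEx2D} and \eqref{eq:DisDivB}: only $(\bar B_1)^n_{i\pm1,j}$ enter, and $\bar B_2$ vanishes in all four neighbors. A direct Python check (the function names are ours, and the grid sizes are arbitrary):

```python
def div_ij(B1_ip1, B1_im1, B2_jp1, B2_jm1, dx, dy):
    # Central-difference discrete divergence, as in eq. (eq:DisDivB).
    return (B1_ip1 - B1_im1) / (2 * dx) + (B2_jp1 - B2_jm1) / (2 * dy)

def counterexample_div(eps, dx, dy):
    # From eq. (eq:counterEx2D): B1 = 1 at cell (i-1,j), B1 = 1 + eps at
    # cell (i+1,j), and B2 = 0 in all four neighboring cells.
    return div_ij(1.0 + eps, 1.0, 0.0, 0.0, dx, dy)
```

The divergence error is strictly positive but of size $O(\epsilon)$, confirming that an arbitrarily small violation of \eqref{eq:DisDivB} suffices for the counterexample.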
We now discuss whether the LF scheme \eqref{eq:2DMHD:LFscheme} preserves the DDF condition \eqref{eq:DisDivB}.
\begin{theorem} \label{theo:2DDivB:LFscheme}
For the LF scheme \eqref{eq:2DMHD:LFscheme},
the divergence error
$$ \varepsilon_{\infty}^n := \max_{ij} \left| {\rm div}_{ij} \bar {\bf B}^{n} \right| ,$$
does not grow with $n$
under the condition \eqref{eq:CFL:LF2D}.
Moreover,
$\{\bar {\bf U}_{ij}^n\}$
satisfy \eqref{eq:DisDivB}
for all $i,j$ and $n \in \mathbb{N}$, if \eqref{eq:DisDivB} holds for the discrete initial data $\{\bar {\bf U}_{ij}^0\}$.
\end{theorem}
\begin{proof}
Using the linearity of the operator $\mbox{div}_{ij}$, one can deduce from \eqref{eq:2DMHD:LFscheme} that
\begin{equation*}
\begin{split}
\mbox{div}_{ij} \bar {\bf B}^{n+1}
=& ( 1 - \lambda ) \mbox{div}_{ij} \bar {\bf B}^{n}
+ \frac{ \lambda_1 }{2} (
\mbox{div}_{i+1,j} \bar {\bf B}^{n} + \mbox{div}_{i-1,j} \bar {\bf B}^{n}
)
\\
&
+ \frac{\lambda_2 }{2} (
\mbox{div}_{i,j+1} \bar {\bf B}^{n} + \mbox{div}_{i,j-1} \bar {\bf B}^{n}
),
\end{split}
\end{equation*}
where $\lambda_1 = \frac{ \alpha_{1,n}^{\tt LF} \Delta t_n } { \Delta x}, \lambda_2 = \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n }{\Delta y}, \lambda = \lambda_1+ \lambda_2 \in (0,1] $.
It follows that
\begin{equation}\label{eq:notincease}
\varepsilon_{\infty}^{n+1} \le ( 1 - \lambda ) \varepsilon_{\infty}^{n} + \lambda_1 \varepsilon_{\infty}^{n} + \lambda_2 \varepsilon_{\infty}^{n} = \varepsilon_{\infty}^{n}.
\end{equation}
This means $\varepsilon_{\infty}^{n} $
does not grow with $n$. If $\varepsilon_{\infty}^{0} =0 $ for the discrete initial data $\{\bar {\bf U}_{ij}^0\}$, then $\varepsilon_{\infty}^{n} =0$ by \eqref{eq:notincease}, i.e., the condition \eqref{eq:DisDivB} is satisfied for all $i,j$ and $n \in \mathbb{N}$.
\end{proof}
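The recursion in the proof can also be exercised numerically: iterating it on a periodic grid with arbitrary initial divergence data, the maximum norm never grows. A sketch (grid size and the values of $\lambda_1,\lambda_2$ are arbitrary, subject to $\lambda_1+\lambda_2 \le 1$):

```python
import numpy as np

def advance_div(d, lam1, lam2):
    """One step of the divergence recursion from the proof:
    div^{n+1} = (1-lam)*div + (lam1/2)*(x-neighbors) + (lam2/2)*(y-neighbors),
    on a periodic grid, with lam = lam1 + lam2 <= 1.  All coefficients are
    nonnegative and sum to 1, hence the max norm cannot increase."""
    lam = lam1 + lam2
    return ((1 - lam) * d
            + 0.5 * lam1 * (np.roll(d, 1, axis=0) + np.roll(d, -1, axis=0))
            + 0.5 * lam2 * (np.roll(d, 1, axis=1) + np.roll(d, -1, axis=1)))
```

In particular, identically zero divergence data stays identically zero, which is the discrete analogue of the last statement of the theorem.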
Finally, we obtain the first provably PP scheme for the 2D MHD system \eqref{eq:MHD}, as stated in the following theorem.
\begin{theorem} \label{theo:FullPP:LFscheme}
Assume that the discrete initial data $\{\bar {\bf U}_{ij}^0\}$ are admissible and satisfy \eqref{eq:DisDivB}, which can be met by, e.g., the following second-order approximation
\begin{align*}
&\Big( \bar \rho_{ij}^0, \bar {\bf m}_{ij}^0, \left(\bar B_3\right)_{ij}^0, \overline {(\rho e)}_{ij}^0 \Big)
= \frac{1}{\Delta x \Delta y} \iint_{I_{ij}} \big( \rho,{\bf m }, B_3, \rho e \big) ({\tt x},{\tt y},0) d{\tt x} d{\tt y},
\\
& \left( \bar B_1 \right)_{ij}^0 = \frac{1}{2 \Delta y} \int_{ {\tt y}_{j-1} }^{ {\tt y}_{j+1 } } B_1( {\tt x}_i,{\tt y},0) d{\tt y},
~
\left( \bar B_2 \right)_{ij}^0 = \frac{1}{2 \Delta x} \int_{ {\tt x}_{i-1} }^{ {\tt x}_{i+1 } } B_2({\tt x},{\tt y}_j,0) d {\tt x},
\\
&~\bar E_{ij}^0 = \overline {(\rho e )}_{ij}^0 + \frac12 \left( \frac{ |\bar{\bf m}_{ij}^0|^2}{\bar \rho_{ij}^0} + |\bar{\bf B}_{ij}^0|^2 \right).
\end{align*}
If the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy \eqref{eq:Lxa12}, then under the CFL condition \eqref{eq:CFL:LF2D},
the LF scheme \eqref{eq:2DMHD:LFscheme} always preserves both $\bar {\bf U}_{ij}^{n+1} \in {\mathcal G}$ and \eqref{eq:DisDivB} for all $i$, $j$, and $n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
This is a direct consequence of Theorems \ref{theo:2DMHD:LFscheme} and \ref{theo:2DDivB:LFscheme}.
\end{proof}
\subsection{High-order schemes}\label{sec:High2D}
This subsection discusses provably PP high-order finite volume and DG schemes for the 2D MHD equations \eqref{eq:MHD}.
We focus on the first-order forward Euler method for time discretization; as before, the analysis also applies to
high-order explicit time discretizations using the SSP methods \cite{Gottlieb2001,Gottlieb2005,Gottlieb2009}.
To achieve high-order [$({\tt K}+1)$-th order] spatial accuracy, one builds in each cell a polynomial vector ${\bf U}_{ij}^n ({\tt x},{\tt y})$ of degree $\tt K$
that approximates the exact solution ${\bf U}({\tt x},{\tt y},t_n)$ within $I_{ij}$; it
is either reconstructed from the cell averages $\{\bar {\bf U}_{ij}^n\}$ in finite volume methods
or evolved in DG methods. In either case, the cell average of ${\bf U}_{ij}^n({\tt x},{\tt y})$ over $I_{ij}$ equals $\bar {\bf U}_{ij}^{n}$.
Let $\{ {\tt x}_i^{(\mu)} \}_{\mu=1}^{\tt Q}$ and $\{ {\tt y}_j^{(\mu)} \}_{\mu=1}^{\tt Q}$ denote the $\tt Q$-point Gauss quadrature nodes in the intervals $[ {\tt x}_{i-\frac12}, {\tt x}_{i+\frac12} ]$ and $[ {\tt y}_{j-\frac12}, {\tt y}_{j+\frac12} ]$, respectively, and $\{\omega_\mu\}_{\mu=1}^{\tt Q}$ be the associated weights satisfying
$\sum_{\mu=1}^{\tt Q} \omega_\mu = 1$.
With this quadrature rule for approximating the integrals of numerical fluxes on cell interfaces,
a finite volume scheme or discrete equation for the cell average in the DG method (see e.g., \cite{zhang2010b}) can be written as
\begin{equation}\label{eq:2DMHD:cellaverage}
\begin{split}
\bar {\bf U}_{ij}^{n+1} & = \bar {\bf U}_{ij}^{n}
- \frac{\Delta t_n}{\Delta x} \sum\limits_{\mu =1}^{\tt Q} \omega_\mu \left(
\hat {\bf F}_1( {\bf U}^{-,\mu}_{i+\frac{1}{2},j}, {\bf U}^{+,\mu}_{i+\frac{1}{2},j} ) -
\hat {\bf F}_1( {\bf U}^{-,\mu}_{i-\frac{1}{2},j}, {\bf U}^{+,\mu}_{i-\frac{1}{2},j})
\right) \\
& \quad - \frac{ \Delta t_n }{\Delta y} \sum\limits_{\mu =1}^{\tt Q} \omega_\mu \left(
\hat {\bf F}_2( {\bf U}^{\mu,-}_{i,j+\frac{1}{2}} , {\bf U}^{\mu,+}_{i,j+\frac{1}{2}} ) -
\hat {\bf F}_2( {\bf U}^{\mu,-}_{i,j-\frac{1}{2}} , {\bf U}^{\mu,+}_{i,j-\frac{1}{2}} )
\right),
\end{split}
\end{equation}
where $\hat {\bf F}_1$ and $\hat {\bf F}_2$ are the LF fluxes in \eqref{eq:LFflux}, and
the limiting values
are given by
\begin{align*}
&{\bf U}^{-,\mu}_{i+\frac{1}{2},j} = {\bf U}_{ij}^n ({\tt x}_{i+\frac12},{\tt y}_j^{(\mu)}),\qquad
{\bf U}^{+,\mu}_{i-\frac{1}{2},j} = {\bf U}_{ij}^n ({\tt x}_{i-\frac12},{\tt y}_j^{(\mu)}),
\\
&{\bf U}^{\mu,-}_{i,j+\frac{1}{2}} = {\bf U}_{ij}^n ({\tt x}_i^{(\mu)},{\tt y}_{j+\frac12}),\qquad
{\bf U}^{\mu,+}_{i,j-\frac{1}{2}} = {\bf U}_{ij}^n ({\tt x}_i^{(\mu)},{\tt y}_{j-\frac12}).
\end{align*}
For the accuracy requirement, $\tt Q$ should satisfy:
${\tt Q} \ge {\tt K}+1$ for a $\mathbb{P}^{\tt K}$-based DG method,
or ${\tt Q} \ge ({\tt K}+1)/2$ for a $({\tt K}+1)$-th order finite volume scheme.
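For reference, such a normalized Gauss rule can be generated from the standard rule on $[-1,1]$ (whose weights sum to $2$), e.g. with `numpy.polynomial.legendre.leggauss`; a sketch of the affine mapping, using our own helper name:

```python
import numpy as np

def gauss_on_interval(Q, a, b):
    """Q-point Gauss nodes on [a, b] with weights normalized to sum to 1,
    matching the convention above; the rule is exact (in the averaged
    sense) for polynomials of degree up to 2Q-1."""
    x, w = np.polynomial.legendre.leggauss(Q)  # rule on [-1, 1]
    nodes = 0.5 * (b - a) * x + 0.5 * (a + b)
    weights = 0.5 * w                          # sum of w is 2 -> sum is 1
    return nodes, weights
```

With ${\tt Q}=3$ this already meets the requirement ${\tt Q}\ge{\tt K}+1$ for a $\mathbb{P}^2$-based DG method.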
We denote
$$
\overline{(B_1)}_{i+\frac{1}{2},j}^{\mu} := \frac12 \left( (B_1)_{i+\frac{1}{2},j}^{-,\mu} + (B_1)_{i+\frac{1}{2},j}^{+,\mu} \right),
\ \
\overline{ ( B_2) }_{i,j+\frac{1}{2}}^{\mu} := \frac12 \left( ( B_2)_{i,j+\frac{1}{2}}^{\mu,-} + ( B_2)_{i,j+\frac{1}{2}}^{\mu,+} \right),
$$
and
define the discrete divergences of the numerical magnetic field ${\bf B}^n( {\tt x},{\tt y})$ as
\begin{align*}
&
{\rm div} _{ij} {\bf B}^n := \frac{\sum\limits_{\mu=1}^{\tt Q} \omega_\mu \left( \overline{(B_1)}_{i+\frac{1}{2},j}^{\mu}
- \overline{(B_1)}_{i-\frac{1}{2},j}^{\mu} \right)}{\Delta x} + \frac{\sum \limits_{\mu=1}^{\tt Q} \omega_\mu \left( \overline{ ( B_2)}_{i,j+\frac{1}{2}}^{\mu}
- \overline{ ( B_2)}_{i,j-\frac{1}{2}}^{\mu} \right)}{\Delta y} ,
\end{align*}
which is an approximation to the left side of \eqref{eq:div000} with
$({\tt x}_0,{ \tt y}_0)$ taken as $({\tt x}_{i-\frac12},{\tt y}_{j-\frac12})$.
\iffalse
It is noticed that
$$
{\rm div} _{ij} {\bf B}^n = \frac12 \big( {\rm div}_{ij}^{\rm in} {\bf B}^n
+ {\rm div}_{ij}^{\rm out} {\bf B}^n
\big),
$$
where
\begin{align*}
&
{\rm div} _{ij}^{\rm in} {\bf B}^n := \frac{\sum\limits_{\mu=1}^{\tt Q} \omega_\mu \left( (B_1)_{i+\frac{1}{2},j}^{-,\mu}
- (B_1)_{i-\frac{1}{2},j}^{+,\mu} \right)}{\Delta x} + \frac{\sum \limits_{\mu=1}^{\tt Q} \omega_\mu \left(
( B_2)_{i,j+\frac{1}{2}}^{\mu,-}
- ( B_2)_{i,j-\frac{1}{2}}^{\mu,+} \right)}{\Delta y} ,
\\
&
{\rm div} _{ij}^{\rm out} {\bf B}^n := \frac{\sum\limits_{\mu=1}^{\tt Q} \omega_\mu \left( (B_1)_{i+\frac{1}{2},j}^{+,\mu}
- (B_1)_{i-\frac{1}{2},j}^{-,\mu} \right)}{\Delta x} + \frac{\sum \limits_{\mu=1}^{\tt Q} \omega_\mu \left(
( B_2)_{i,j+\frac{1}{2}}^{\mu,+}
- ( B_2)_{i,j-\frac{1}{2}}^{\mu,-} \right)}{\Delta y} .
\end{align*}
\fi
Let $\{ \hat {\tt x}_i^{(\nu) }\}_{\nu=1} ^ {\tt L}$ and $\{ \hat {\tt y}_j^{(\nu)} \}_{\nu=1} ^{\tt L}$ be the $\tt L$-point Gauss-Lobatto quadrature nodes in the intervals
$[{\tt x}_{i-\frac{1}{2}},{\tt x}_{i+\frac{1}{2}}]$ and $[{\tt y}_{j-\frac{1}{2}},{\tt y}_{j+\frac{1}{2}} ]$ respectively, and
$ \{\hat \omega_\nu\}_{\nu=1} ^ {\tt L}$ be the associated weights satisfying $\sum_{\nu=1}^{\tt L} \hat\omega_\nu = 1$, where ${\tt L}\ge \frac{{\tt K}+3}2$ so that the
associated quadrature has algebraic precision of degree at least ${\tt K}$.
Then we have the following sufficient
conditions for
the high-order scheme \eqref{eq:2DMHD:cellaverage} to be PP.
\begin{theorem} \label{thm:PP:2DMHD}
If the polynomial vectors $\{{\bf U}_{ij}^n({\tt x},{\tt y})\}$ satisfy:
\begin{align}\label{eq:DivB:cst12}
&
\mbox{\rm div} _{ij} {\bf B}^n = 0, \quad \forall~i,j,
\\
&{\bf U}_{ij}^n ( \hat {\tt x}_i^{(\nu)},{\tt y}_j^{(\mu)} ),~{\bf U}_{ij}^n ( {\tt x}_i^{(\mu)}, \hat {\tt y}_j^{(\nu)} ) \in {\mathcal G},\quad \forall~i,j,\mu,\nu,
\label{eq:2Dadmissiblity}
\end{align}
then the scheme \eqref{eq:2DMHD:cellaverage} always preserves $\bar{\bf U}_{ij}^{n+1} \in {\mathcal G}$ under the CFL condition
\begin{equation}\label{eq:CFL:2DMHD}
0< \frac{\alpha_{1,n}^{\tt LF} \Delta t_n}{\Delta x} + \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n}{\Delta y} \le \hat \omega_1,
\end{equation}
where the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\begin{equation}\label{eq:2DhighLFpara}
\alpha_{1,n}^{\tt LF} >
\max_{ i,j,\mu} \alpha_1 \big( { \bf U }_{i+\frac12,j}^{\pm,\mu} , { \bf U }_{i-\frac12,j}^{\pm,\mu} \big)
,\quad \alpha_{2,n}^{\tt LF} > \max_{ i,j,\mu} \alpha_2 \big( { \bf U }_{i,j+\frac12}^{\mu,\pm} , { \bf U }_{i,j-\frac12}^{\mu,\pm} \big).
\end{equation}
\end{theorem}
\begin{proof}
Using the exactness of the Gauss-Lobatto quadrature rule with $\tt L$ nodes and the Gauss quadrature rule with $\tt Q$ nodes for the polynomials of degree $\tt K$,
one can derive (cf. \cite{zhang2010b} for more details) that
\begin{equation} \label{eq:U2Dsplit}
\begin{split}
\bar{\bf U}_{ij}^n
&= \frac{\lambda_1}{\lambda} \sum \limits_{\nu = 2}^{{\tt L}-1} \sum \limits_{\mu = 1}^{\tt Q} \hat \omega_\nu \omega_\mu {\bf U}_{ij}^n\big(\hat {\tt x}_i^{(\nu)},{\tt y}_j^{(\mu)}\big) + \frac{\lambda_2}{\lambda} \sum \limits_{\nu = 2}^{{\tt L}-1} \sum \limits_{\mu = 1}^{\tt Q} \hat \omega_\nu \omega_\mu {\bf U}_{ij}^n\big( {\tt x}_i^{(\mu)},\hat {\tt y}_j^{(\nu)} \big) \\
&\quad + \frac{\lambda_1 \hat \omega_1}{\lambda} \sum \limits_{\mu = 1}^{\tt Q} \omega_\mu \left( {\bf U}_{i-\frac{1}{2},j}^{+,\mu} +
{\bf U}_{i+\frac{1}{2},j}^{-,\mu} \right)
+ \frac{\lambda_2 \hat \omega_1}{\lambda} \sum \limits_{\mu = 1}^{\tt Q} \omega_\mu \left( {\bf U}_{i,j-\frac{1}{2}}^{\mu,+} +
{\bf U}_{i,j+\frac{1}{2}}^{\mu,-} \right) ,
\end{split}
\end{equation}
where $\hat \omega_1 = \hat \omega_{\tt L}$ is used, and $\lambda_1 = \frac{ \alpha_{1,n}^{\tt LF} \Delta t_n } { \Delta x}, \lambda_2 = \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n }{\Delta y}, \lambda = \lambda_1+ \lambda_2 \in (0,\hat \omega_1] $ by \eqref{eq:CFL:2DMHD}.
After substituting \eqref{eq:LFflux} and \eqref{eq:U2Dsplit} into \eqref{eq:2DMHD:cellaverage}, we can, after some technical rearrangement, rewrite the scheme \eqref{eq:2DMHD:cellaverage} in the following convex combination form
\begin{align} \label{eq:2DMHD:split:proof}
&\bar{\bf U}_{ij}^{n+1}
=
\sum \limits_{\nu = 2}^{{\tt L}-1} \hat \omega_\nu {\bf \Xi}_\nu
+
2( \hat \omega_1 - \lambda ) {\bf \Xi}_{\tt L}
+ 2 \lambda {\bf \Xi}_1 ,
\end{align}
where ${\bf \Xi}_1 = \frac12 \left( {\bf \Xi}_- + {\bf \Xi}_+ \right)$, and
\begin{align*}
&{\bf \Xi}_\nu
=
\frac{\lambda_1}{\lambda} \sum \limits_{\mu = 1}^{\tt Q} \omega_\mu {\bf U}_{ij}^n \big(\hat {\tt x}_i^{(\nu)},{\tt y}_j^\mu\big) + \frac{\lambda_2}{\lambda}
\sum \limits_{\mu = 1}^{\tt Q} \omega_\mu {\bf U}_{ij}^n \big( {\tt x}_i^{(\mu)},\hat {\tt y}_j^{(\nu)} \big),\quad 2 \le \nu \le {\tt L}-1,
\\ \nonumber
\begin{split}
&{\bf \Xi}_{\tt L} =
\frac{1}{ 2\lambda }
\sum\limits_{\mu=1}^{\tt Q} { \omega _\mu }
\bigg(
\lambda_1 \left(
{ \bf U }_{i+\frac12,j}^{-,\mu}
+ { \bf U }_{i-\frac12,j}^{+,\mu}
\right)
+ \lambda_2 \left(
{ \bf U }_{i,j+\frac12}^{\mu,-}
+ { \bf U }_{i,j-\frac12}^{\mu,+}
\right)
\bigg),
\end{split}
\\ \nonumber
\begin{split}
&
{\bf \Xi}_\pm =
\frac{1}{ 2\left( \frac{\alpha_{1,n}^{\tt LF} }{\Delta x} + \frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \right)}
\sum\limits_{\mu=1}^{\tt Q} { \omega _\mu }
\left[
\frac{\alpha_{1,n}^{\tt LF}}{\Delta x} \left(
{ \bf U }_{i+\frac12,j}^{\pm,\mu} - \frac{ {\bf F}_1 ( { \bf U }_{i+\frac12,j}^{\pm,\mu} ) }{ \alpha_{1,n}^{\tt LF} }
+ { \bf U }_{i-\frac12,j}^{\pm,\mu} + \frac{ {\bf F}_1 ( { \bf U }_{i-\frac12,j}^{\pm,\mu} ) }{ \alpha_{1,n}^{\tt LF} }
\right)\right.
\\%&\quad \quad \quad \quad \quad
&\qquad + \left.
\frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \left(
{ \bf U }_{i,j+\frac12}^{\mu,\pm} - \frac { {\bf F}_2 ( { \bf U }_{i,j+\frac12}^{\mu,\pm} ) } { \alpha_{2,n}^{\tt LF} }
+ { \bf U }_{i,j-\frac12}^{\mu,\pm} + \frac { {\bf F}_2 ( { \bf U }_{i,j-\frac12}^{\mu,\pm} ) } { \alpha_{2,n}^{\tt LF} }
\right)
\right].
\end{split}
\end{align*}
The condition \eqref{eq:2Dadmissiblity} implies ${\bf \Xi}_\nu \in {\mathcal G},~2 \le \nu \le {\tt L}$, because $\mathcal G$ is convex.
In order to show the admissibility of ${\bf \Xi}_1$ by using
Theorem \ref{theo:MHD:LLFsplit2D},
one has to verify the corresponding discrete divergence-free condition,
which is found to be \eqref{eq:DivB:cst12}.
Hence ${\bf \Xi}_1 \in{\mathcal G}$.
This means the form \eqref{eq:2DMHD:split:proof} is a convex combination of the admissible states $\{{\bf \Xi}_k,1\le k \le {\tt L}\}$.
It follows from the convexity of $\mathcal G$ that $ \bar{\bf U}_{ij}^{n+1} \in {\mathcal G}$.
The proof is completed.
\end{proof}
\begin{remark} \label{rem:wu_aaa}
For some other hyperbolic systems such as
the Euler \cite{zhang2010b} and shallow water \cite{Xing2010}
equations, the condition \eqref{eq:2Dadmissiblity}
is sufficient to ensure the positivity of 2D high-order
schemes.
However, contrary to
the usual
expectation (e.g., \cite{cheng}),
the condition \eqref{eq:2Dadmissiblity} is
not sufficient in the ideal MHD case, even if ${\bf B}^n_{ij}({\tt x},{\tt y})$ is locally divergence-free.
This is indicated by Theorem \ref{theo:counterEx} and confirmed by the numerical experiments in Section \ref{sec:examples},
and it demonstrates, to some extent, the necessity
of \eqref{eq:DivB:cst12}.
\end{remark}
\begin{remark} \label{rem:notDDF}
In practice, the
condition \eqref{eq:2Dadmissiblity}
can be easily met via a simple scaling limiting procedure
\cite{cheng}.
However, it is not easy to meet \eqref{eq:DivB:cst12}, because it
depends on the limiting values of the magnetic field computed from the four neighboring cells
of $I_{ij}$.
If ${\bf B}^n({\tt x},{\tt y})$ is globally
divergence-free,
i.e., locally divergence-free in each cell with
normal magnetic component continuous across the cell interfaces,
then by Green's theorem, \eqref{eq:DivB:cst12} is naturally satisfied.
However, the PP limiting technique with local scaling
may
destroy the globally divergence-free property of ${\bf B}^n({\tt x},{\tt y})$.
Hence, it is nontrivial and still open to design a limiting procedure
for the polynomials
$\{{\bf U}_{ij}^n({\tt x},{\tt y})\}$ which can enforce the conditions \eqref{eq:2Dadmissiblity} and \eqref{eq:DivB:cst12} at the same time.
As a continuation of this work, Ref. \cite{WuShu2018} reports
our progress in developing
multi-dimensional provably PP high-order schemes via the discretization of the symmetrizable ideal MHD equations.
\end{remark}
We now derive a lower bound on the internal energy when the
proposed DDF condition \eqref{eq:DivB:cst12} is not satisfied, showing that
a negative internal energy is more likely to be computed when $|{\bf v}\cdot {\bf B}|$ and the discrete divergence error are large.
\begin{theorem} \label{thm:PP:2DMHD:further}
Assume that the polynomial vectors $\{{\bf U}_{ij}^n({\tt x},{\tt y})\}$ satisfy
\eqref{eq:2Dadmissiblity}, and the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\eqref{eq:2DhighLFpara}.
Then under the CFL condition \eqref{eq:CFL:2DMHD},
the solution $\bar{\bf U}_{ij}^{n+1}$
of the scheme \eqref{eq:2DMHD:cellaverage} satisfies
that $\bar \rho_{ij}^{n+1} > 0$,
and
\begin{equation}\label{eq:2222}
{\mathcal E} ( \bar{\bf U}_{ij}^{n+1} ) > -\Delta t_n \big(\bar{\bf v}_{ij}^{n+1} \cdot
\bar{\bf B}_{ij}^{n+1} \big) {\rm div}_{ij}{\bf B}^n,
\end{equation}
where the lower bound quantifies how negative ${\mathcal E} ( \bar{\bf U}_{ij}^{n+1} )$ can become, and
$\bar{\bf v}_{ij}^{n+1} :=\bar{\bf m}_{ij}^{n+1}/\bar{\rho}^{n+1}_{ij}$.
\end{theorem}
\begin{proof}
It is seen from \eqref{eq:2DMHD:split:proof} that
$\bar \rho_{ij}^{n+1}$ is a convex combination of
the first components of ${\bf \Xi}_\nu, 1 \le \nu \le {\tt L}$, which are
all positive. Thus $\bar \rho_{ij}^{n+1} > 0$. For any ${\bf v}^*,{\bf B}^* \in \mathbb{R}^3$,
$$
\bigg( {\bf \Xi}_1 \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg) \times 2\left( \frac{\alpha_{1,n}^{\tt LF} }{\Delta x} + \frac{\alpha_{2,n}^{\tt LF}}{\Delta y} \right)
> - ({\bf v}^* \cdot {\bf B}^*) {\rm div}_{ij}{\bf B}^n,
$$
whose derivation is similar to that of Theorem \ref{theo:MHD:LLFsplit2D}.
Because ${\bf \Xi}_\nu \in {\mathcal G},~2 \le \nu \le {\tt L}$, we deduce
from \eqref{eq:2DMHD:split:proof} that
{\small
\begin{align*}
&\bar{\bf U}_{ij}^{n+1} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}
=
\sum \limits_{\nu = 2}^{{\tt L}-1} \hat \omega_\nu
\bigg( {\bf \Xi}_\nu \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg)
\\
& \quad +
2( \hat \omega_1 - \lambda ) \bigg( {\bf \Xi}_{\tt L} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg)
+ 2 \lambda \bigg( {\bf \Xi}_1 \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg)
\\
& > 2 \lambda \bigg( {\bf \Xi}_1 \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2} \bigg)
> - \Delta t_n ({\bf v}^* \cdot {\bf B}^*) {\rm div}_{ij}{\bf B}^n.
\end{align*}}
Taking ${\bf v}^*= \bar{\bf v}^{n+1}_{ij} $
and ${\bf B}^* = \bar{\bf B}^{n+1}_{ij}$ gives \eqref{eq:2222}.
\end{proof}
\section{Numerical experiments}\label{sec:examples}
Several numerical examples are provided in this section
to further confirm the above PP analysis.
\subsection{1D case} We first give several 1D numerical examples.
\subsubsection{Simple example}
This simple example, which can be verified by hand with a calculator,
is used to numerically
confirm the conclusion of Theorem \ref{eq:1DnotPP}
and to show that the 1D Lax-Friedrichs (LF) scheme
with the standard numerical viscosity parameter is not PP in general.
We consider the 1D data in \eqref{eq:exampleUUU} with
$\gamma =1.4$ and ${\tt p}=10^{-5}$, and then examine the
pressure of $\bar {\bf U}_{j}^{n+1}$ computed by
the LF scheme \eqref{eq:1DMHD:LFscheme} with the standard parameter $\alpha_{1,n}^{\tt LF}=\max_j {\mathscr{R}}_1 ( \bar {\bf U}_{j}^n ) $.
The pressure $\bar p_{j}^{n+1}$ obtained using different CFL numbers ${\tt C}
\in \{ 0.001,0.002,\cdots, 1 \}$ is displayed in the left plot of Fig. \ref{fig:valcex1}.
It is seen that the LF scheme \eqref{eq:1DMHD:LFscheme} with the standard parameter
fails to guarantee the positivity of pressure, even if a very small CFL number is used. However, when $\alpha_{1,n}^{\tt LF}$
satisfies the proposed condition \eqref{eq:Lxa1}, the positivity is always preserved for any CFL number less than one, as expected by Theorem \ref{theo:1DMHD:LFscheme}; see the right plot of Fig. \ref{fig:valcex1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.43\textwidth]{cEx1}~~~~~~
\includegraphics[width=0.43\textwidth]{cEx1my}
\caption{\small
The pressure $\bar p_{j}^{n+1}$ obtained by the LF scheme \eqref{eq:1DMHD:LFscheme} with different parameter $\alpha_{1,n}^{\tt LF}$ and using different CFL numbers. Left:
$\alpha_{1,n}^{\tt LF}=\max_j {\mathscr{R}}_1 ( \bar {\bf U}_{j}^n ) $; right:
$\alpha_{1,n}^{\tt LF}=\max_{j} \alpha_1 \big( \bar {\bf U}_{j+1}^n, \bar {\bf U}_{j-1}^n;
\sqrt{ \bar{ \rho }_{j+1}^n } /{( \sqrt{ \bar{ \rho }_{j+1}^n } +
\sqrt{ \bar{ \rho }_{j-1}^n } )} \big)$.
}\label{fig:valcex1}
\end{figure}
In the following, we conduct numerical experiments on several 1D MHD problems
with low density, low pressure, strong discontinuity, and/or low plasma-beta
$\beta := 2p/|{\bf B}|^2 $,
to demonstrate the accuracy and robustness of the 1D provably PP high-order methods.
Without loss of generality, we take the third-order (${\mathbb P}^2$-based)
discontinuous Galerkin (DG) method, together with the third-order explicit
strong stability preserving (SSP) Runge-Kutta time discretization \cite{Gottlieb2009}, as our base scheme.
The LF flux \eqref{eq:LFflux} is used with the
numerical viscosity parameters satisfying the condition \eqref{eq:LxaH}.
The PP limiter in \cite{cheng} is employed to enforce the condition
\eqref{eq:1DDG:con2}. According to our analysis in Theorem \ref{thm:PP:1DMHD},
the resulting DG scheme is PP.
Unless otherwise stated, all the computations are restricted to the ideal equation
of state \eqref{eq:EOS},
and the CFL number is taken as 0.15.
\subsubsection{Accuracy test}
A smooth problem is tested to verify the accuracy of the third-order DG method.
It is similar to the one simulated in \cite{zhang2010b} for testing the PP DG scheme for the Euler equations. The exact solution is given by
\begin{equation*}
(\rho,{\bf v},p,{\bf B}) ({\tt x},t)=
(1+0.99\sin({\tt x}-t),~1,~0,~0,~1,~0.1,~0,~0),\quad {\tt x}\in[0,2\pi],~t\ge 0,
\end{equation*}
which describes an MHD sine wave propagating with low density and $\gamma=1.4$.
Table \ref{tab:1Dacc1} lists the numerical errors at $t=0.1$
in the numerical density and the corresponding convergence rates
for the PP third-order DG method at different grid resolutions.
The results show that the expected convergence order is achieved.
\begin{table}[htbp]
\centering
\caption{\small
Numerical errors at $t=0.1$ in the density and corresponding convergence rates for
the 1D PP third-order DG method at
different grid resolutions.
}
\label{tab:1Dacc1}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
~Mesh~&~$l^1$-error~& ~order~ &~$l^2$-error~&~order~&~$l^\infty$-error~&~order~ \\
\hline
$40$ & 2.1268e-4 & -- & 9.5354e-5 & -- & 5.9715e-5 & -- \\
$80$ & 3.7004e-5 & 2.52 & 1.6502e-5& 2.53 & 1.0401e-5 & 2.52 \\
$160$ & 5.1857e-6 & 2.84 & 2.3121e-6& 2.84 & 1.4582e-6 & 2.83\\
$320$ & 6.6087e-7 & 2.97 & 2.9467e-7& 2.97 & 1.8587e-7 & 2.97 \\
$640$ & 8.2817e-8 & 3.00 & 3.6926e-8& 3.00 & 2.3292e-8 & 3.00 \\
$1280$ & 1.0358e-8 & 3.00 &4.6185e-9& 3.00 & 2.9133e-9 & 3.00\\
\hline
\end{tabular}
\end{table}
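The convergence orders reported in Table \ref{tab:1Dacc1} can be reproduced directly from the tabulated errors: since the mesh is refined by a factor of two at each level, the observed order between successive levels is $\log_2(e_N/e_{2N})$. A quick Python check using the $l^1$-error column:

```python
import math

# l^1-errors of the density from the accuracy table, meshes 40, 80, ..., 1280 cells
errors = [2.1268e-4, 3.7004e-5, 5.1857e-6, 6.6087e-7, 8.2817e-8, 1.0358e-8]

# observed convergence order between successive mesh refinements
orders = [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]
print([round(o, 2) for o in orders])  # [2.52, 2.84, 2.97, 3.0, 3.0]
```

The orders approach $3$, the expected rate for the ${\mathbb P}^2$-based DG method.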
\subsubsection{Positivity-preserving tests}
Two extreme 1D Riemann problems are solved to verify the robustness and
the PP property of the third-order DG scheme.
The first is a 1D vacuum shock tube
problem \cite{Christlieb} with $\gamma=\frac53$ and the initial data given by
\begin{equation*}
(\rho,{\bf v},p,{\bf B}) ({\tt x},0)
=\begin{cases}
(10^{-12},~0,~0,~0,~10^{-12},~0,~0,~0), \quad & {\tt x}<0,
\\
(1,~0,~0,~0,~0.5,~0,~1,~0), \quad & {\tt x}>0.
\end{cases}
\end{equation*}
We use this example to demonstrate that the PP DG scheme can handle extremely low density and pressure.
The computational domain
is taken as $[-0.5,0.5]$.
Fig. \ref{fig:1DRP1} displays the density and pressure of the numerical solution on the mesh of $200$ cells as well as
the highly resolved solution with $2000$ cells at $t=0.1$.
In comparison with the results in \cite{Christlieb},
the low pressure and the low density are both captured accurately.
The low-resolution and high-resolution solutions are in good agreement.
The PP third-order DG method works robustly during the whole simulation.
If the PP limiter is not employed to enforce the condition
\eqref{eq:1DDG:con2}, the method breaks down within a few time steps.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{VST_d}
\includegraphics[width=0.49\textwidth]{VST_p}
\caption{\small
The density (left) and pressure (right) obtained by the PP third-order DG method
on the meshes of $200$ cells (symbols ``$\circ$'') and $2000$ cells (solid lines), respectively.
}\label{fig:1DRP1}
\end{figure}
The second Riemann problem is extended from the Leblanc problem \cite{zhang2010b} of gas dynamics by adding a strong magnetic field. The initial condition is
\begin{equation*}
(\rho,{\bf v},p,{\bf B}) ({\tt x},0)
=\begin{cases}
(2,~0,~0,~0,~10^{9},~0,~5000,~5000), \quad & {\tt x}<0,
\\
(0.001,~0,~0,~0,~1,~0,~5000,~5000), \quad & {\tt x}>0.
\end{cases}
\end{equation*}
The adiabatic index is $\gamma=1.4$, and the computational domain
is $[-10,10]$. There is a very large jump in the initial pressure, and the plasma-beta of the right state is extremely low
($\beta = 4 \times 10^{-8}$).
Successfully simulating this problem is a challenge.
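The quoted plasma-beta follows directly from the definition $\beta := 2p/|{\bf B}|^2$ applied to the right state. A one-line Python check:

```python
# right state of the extended Leblanc problem: p = 1, B = (0, 5000, 5000)
p, B = 1.0, (0.0, 5000.0, 5000.0)
beta = 2 * p / sum(b * b for b in B)  # plasma-beta of the right state
print(beta)  # 4e-08
```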
As the exact solution contains strong discontinuities,
the WENO limiter \cite{Qiu2005} is implemented right before the PP limiting procedure
with the aid of the local characteristic decomposition within the ``trouble'' cells adaptively detected by the indicator in \cite{Krivodonova}.
To fully resolve the wave structure, a fine mesh is required for such a test; see, e.g., \cite{zhang2010b}.
Fig. \ref{fig:1DRP2} gives the numerical results
at $t=0.00003$ obtained by the PP third-order DG method using
$3200$ cells and $10000$ cells, respectively. We observe
that the strong discontinuities are well captured, and
the low-resolution and high-resolution solutions are in good agreement.
In this extreme test, it is also necessary to use the PP limiter to meet the
condition \eqref{eq:1DDG:con2}, otherwise the DG method will break down quickly
due to negative numerical pressure.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.96\textwidth]{LST_all}
\caption{\small
Numerical results at $t=0.00003$ obtained by the PP third-order DG method using $3200$ cells (symbols ``$\circ$'') and $10000$ cells (solid lines).
Top left: density; top right: log plot of density;
bottom left: velocity $v_1$; bottom right: magnetic pressure.
}\label{fig:1DRP2}
\end{figure}
\subsection{2D case}
We now present several 2D numerical examples to further
confirm our theoretical analysis and the importance of the proposed
discrete divergence-free (DDF) condition.
\subsubsection{Simple example}
This is a simple test which can be repeated easily by interested readers.
We consider the 2D discrete data in \eqref{eq:counterEx2D}
with $\gamma=1.4$, $\epsilon = 10^{-3}$ and ${\tt p}=10^{-8}$. The states in this data
are admissible and are slight perturbations of the constant state
$(1,1,0,0,1,0,0,1+ 2.5 {\tt p} )^\top$,
chosen so that the proposed DDF condition \eqref{eq:DisDivB} is not satisfied.
We then check the
pressure $\bar p_{ij}^{n+1}$ computed by
the 2D LF scheme \eqref{eq:2DMHD:LFscheme} with $\Delta x=\Delta y$
and different CFL numbers ${\tt C} \in \{0.01,0.02,\cdots,1\}$.
The results are shown in Fig. \ref{fig:valcex2},
where two sets of parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ are considered.
It can be observed that, even though the
parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy the condition \eqref{eq:Lxa12},
or are much larger,
the admissibility of all the discrete states at time level $n$
cannot ensure the positivity of the numerical pressure at the next level. This
confirms Theorem \ref{theo:counterEx} and the importance of the DDF condition \eqref{eq:DisDivB}.
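For reference, the admissibility of the unperturbed base state can be checked by recovering the pressure from the conservative vector through the ideal EOS, $p=(\gamma-1)\big(E-\tfrac12\rho|{\bf v}|^2-\tfrac12|{\bf B}|^2\big)$. A small Python sketch (the component ordering $(\rho,\rho{\bf v},{\bf B},E)^\top$ is assumed):

```python
gamma, tt_p = 1.4, 1e-8

# base state (rho, rho v1, rho v2, rho v3, B1, B2, B3, E) from the text
U = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0 + 2.5 * tt_p]
rho, m, B, E = U[0], U[1:4], U[4:7], U[7]
v = [mi / rho for mi in m]

# pressure recovered via the ideal EOS: p = (gamma-1)(E - rho|v|^2/2 - |B|^2/2)
p = (gamma - 1) * (E - 0.5 * rho * sum(x * x for x in v) - 0.5 * sum(x * x for x in B))
print(p)  # recovers p = tt_p = 1e-8 (up to rounding), so the state is admissible
```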
\begin{figure}[htbp]
\centering
\includegraphics[width=0.43\textwidth]{cEx2a}~~~~~
\includegraphics[width=0.44\textwidth]{cEx2}
\caption{\small
The pressure $\bar p_{ij}^{n+1}$ obtained by the 2D LF scheme
\eqref{eq:2DMHD:LFscheme} with larger numerical viscosity parameters and using different CFL numbers. Left:
$\alpha_{\ell,n}^{\tt LF} = 2 \max_{ij}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ij})}$; right:
$\alpha_{\ell,n}^{\tt LF} = 50 \max_{ij}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ij})}$.
}\label{fig:valcex2}
\end{figure}
In the following, we consider several more practical examples to further verify
our theoretical findings for 2D PP high-order schemes, and to
seek numerical evidence that enforcing the condition \eqref{eq:2Dadmissiblity}
alone is not sufficient to obtain a PP high-order conservative scheme.
To this end, we take
the locally divergence-free DG methods \cite{Li2005}, together with the third-order SSP
Runge-Kutta time discretization \cite{Gottlieb2009}, as the base schemes.
We use the PP limiter in \cite{cheng} to enforce the condition \eqref{eq:2Dadmissiblity}.
As we have discussed in Section \ref{sec:High2D},
the resulting high-order DG schemes do
not always preserve the positivity of pressure,
because the proposed DDF condition \eqref{eq:DivB:cst12} is not always satisfied
(even though the numerical magnetic field is locally divergence-free
within each cell).
It is worth mentioning that the locally divergence-free property and
the PP limiter can enhance, to a certain extent, the robustness of high-order DG methods.
Without loss of generality, the third-order (${\mathbb P}^2$-based) DG method is considered. Unless otherwise stated, all the computations are restricted to the ideal
equation of state \eqref{eq:EOS} with the adiabatic index $\gamma=\frac53$,
and the CFL number is taken as 0.15.
For problems involving discontinuities, before using the PP limiter, the WENO limiter \cite{Qiu2005} with locally divergence-free reconstruction (cf. \cite{ZhaoTang2017}) is also implemented with the aid of the local characteristic decomposition,
to enhance the numerical stability of high-order DG methods in resolving the strong discontinuities and their interactions. The WENO limiter is only used
in the ``trouble'' cells adaptively detected by the indicator in \cite{Krivodonova}.
\subsubsection{Accuracy tests}
Two smooth problems are solved to test the accuracy of the ${\mathbb P}^2$-based
DG method with the PP limiter.
The first problem, similar to the one simulated in \cite{zhang2010b},
describes an MHD sine wave propagating periodically within the domain $[0,2\pi]^2$, with $\gamma=1.4$.
The exact solution is given by
\begin{equation*}
(\rho,{\bf v},p,{\bf B})({\tt x},{\tt y},t)
= \big( 1+0.99\sin({\tt x}+{\tt y}-2t),~1,~1,~0,~1,~0.1,~0.1,~0\big).
\end{equation*}
The second problem is the vortex problem \cite{Christlieb}.
The initial condition is a mean flow
\begin{equation*}
(\rho,{\bf v},p,{\bf B})({\tt x},{\tt y},0)
=(1,~1,~1,~0,~1,~0,~0,~0),
\end{equation*}
with vortex perturbations on $v_1, v_2, B_1, B_2,$ and $p$:
\begin{gather*}
(\delta v_1, \delta v_2)=\frac{\mu}{ \sqrt{2} \pi} {\rm e}^{0.5(1-r^2)} (-{\tt y},{\tt x}),
\quad
(\delta B_1,\delta B_2) = \frac{\mu}{2\pi} {\rm e}^{0.5(1-r^2)} (-{\tt y},{\tt x}),
\\
\delta p = -\frac{ \mu^2 (1+r^2) }{8 \pi^2} {\rm e}^{1-r^2},
\end{gather*}
where $r=\sqrt{{\tt x}^2+{\tt y}^2}$, and
the vortex strength
$\mu=5.389489439$ is chosen
such that the lowest pressure in the center of the vortex is about $5.3 \times 10^{-12}$.
The computational domain is $[-10,10]^2$ with periodic boundary conditions.
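Since the mean pressure is $1$ and $\delta p$ is most negative at the vortex center $r=0$, the lowest initial pressure is $p_{\min} = 1 - \frac{\mu^2 e}{8\pi^2}$. A quick Python check of its size:

```python
import math

mu = 5.389489439  # vortex strength from the text
p_min = 1.0 - mu**2 * math.e / (8 * math.pi**2)  # pressure at the vortex center
print(p_min)  # about 5.3e-12, matching the quoted value
```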
Fig. \ref{fig:2Dacc} displays the
numerical errors obtained by the third-order DG method with the PP limiter at different grid resolutions.
The results show that the expected convergence order is achieved, and
the PP limiter does not destroy the accuracy.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{2DSmooth1}
\includegraphics[width=0.49\textwidth]{2DSmooth2}
\caption{\small
Numerical errors obtained by the third-order DG method at different grid resolutions with $N\times N$ cells. Left: the first 2D smooth problem at $t=0.1$; right:
the second 2D smooth problem at $t=0.05$. The horizontal axis denotes the value of $N$.
}\label{fig:2Dacc}
\end{figure}
\subsubsection{Benchmark tests}
The Orszag-Tang problem (see, e.g., \cite{Li2005}) and the rotor problem \cite{BalsaraSpicer1999} are benchmark tests widely performed in the literature. Although not extreme, they are simulated by our third-order DG code
to verify the high resolution of the DG method as well as the correctness of our code.
The contour plots of the density are shown
in Fig. \ref{fig:standardtests} and agree well with those computed in \cite{BalsaraSpicer1999,Li2005}. We observe that the PP limiter is never triggered, as the condition \eqref{eq:2Dadmissiblity}
is automatically satisfied in these simulations.
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.98\textwidth]{OT_RO_d}}
\caption{\small Numerical solutions of the
Orszag-Tang problem with $200\times 200$ cells at $t=2$ (left) and the rotor problem with $400\times 400$ cells at $t=0.295$ (right) computed by the third-order DG method.}
\label{fig:standardtests}
\end{figure}
\subsubsection{Positivity-preserving tests}
Two extreme problems are solved to demonstrate
the importance of the proposed conditions \eqref{eq:DivB:cst12}--\eqref{eq:2Dadmissiblity}
in Theorem \ref{thm:PP:2DMHD} and validate the estimate in Theorem \ref{thm:PP:2DMHD:further}.
The first one is the blast problem \cite{BalsaraSpicer1999} to verify
the importance of enforcing the condition \eqref{eq:2Dadmissiblity}
for achieving PP high-order DG methods.
This problem describes a
circular strong fast magnetosonic shock that forms and propagates into the ambient
plasma with low plasma-beta.
Initially, the computational domain $[-0.5,0.5]^2$
is filled with plasma at rest with unit density and adiabatic index $\gamma=1.4$. The explosion zone
$(r<0.1)$ has a pressure of $10^3$, while the ambient medium $(r>0.1)$ has a lower pressure of $0.1$, where $r=\sqrt{{\tt x}^2+{\tt y}^2}$. The magnetic field is initialized in the $\tt x$-direction as $100/\sqrt{4\pi}$.
Figure \ref{fig:BL1} shows the numerical results at $t=0.01$ computed by the third-order DG method with the PP limiter on the mesh of $320\times320$
uniform cells.
We see that
the results are in good agreement with those displayed
in \cite{BalsaraSpicer1999,Li2011,Christlieb},
and the density profile is well captured with fewer oscillations than those shown in \cite{BalsaraSpicer1999,Christlieb}.
It is noticed that
the third-order DG method fails to preserve the positivity of pressure at time $t \approx 2.845\times 10^{-4}$
if the PP limiting procedure is not employed to enforce the condition \eqref{eq:2Dadmissiblity}.
\begin{figure}[htbp]
\centering
{\includegraphics[width=0.49\textwidth]{BL1_320x320_d}}
{\includegraphics[width=0.49\textwidth]{BL1_320x320_p}}
{\includegraphics[width=0.49\textwidth]{BL1_320x320_v}}
{\includegraphics[width=0.49\textwidth]{BL1_320x320_pm}}
\caption{\small Blast problem:
the contour plots of density $\rho$ (top left),
pressure $p$ (top right), velocity $|{\bf v}|$ (bottom left)
and magnetic pressure $p_m$ (bottom right) at $t=0.01$.}
\label{fig:BL1}
\end{figure}
To examine the PP property of the third-order DG scheme with the PP limiter, it is necessary to try a more challenging test (rather than the standard tests).
In a high Mach number jet with a strong magnetic field,
the internal energy is very
small compared to the huge kinetic and magnetic energies,
so negative pressure can easily appear in the numerical simulation.
We consider the Mach 800 dense jet in \cite{Balsara2012}, and add a magnetic field so as to simulate the MHD jet flows.
Initially, the computational domain $[-0.5,0.5]\times[0,1.5]$ is
filled with a static uniform medium with density of $0.1\gamma$ and unit pressure,
where the adiabatic index $\gamma=1.4$.
A dense jet is injected in the $\tt y$-direction through the inlet part ($\left|{\tt x}\right|<0.05$)
on the bottom boundary (${\tt y}=0$) with density of $\gamma$, unit pressure and speed of $800$.
The fixed inflow condition
is specified on the nozzle $\{{\tt y}=0,\left|{\tt x}\right|<0.05\}$, and the
other boundary conditions are outflow.
A magnetic field with a magnitude
of $B_a$ is initialized along the $\tt y$-direction.
The presence of the magnetic field makes this test more extreme.
A larger $B_a$ implies a larger value of $|{\bf v} \cdot {\bf B}|=800 B_a$,
which more easily leads to
negative numerical pressure when the DDF condition \eqref{eq:DivB:cst12}
is seriously violated, as indicated by Theorem \ref{thm:PP:2DMHD:further}.
This kind of problem is therefore well suited to
examining the PP property.
We first consider a relatively mild setup with a weak magnetic field $B_a=\sqrt{20}$.
The corresponding plasma-beta ($\beta=0.1$) is not very small.
The locally divergence-free DG method with the PP limiter
works well for this weakly magnetized case; see Fig. \ref{fig:jet1}, which shows
the results at $t=0.002$ on the mesh of $400 \times 600$ cells.
It is seen that the density, gas pressure, and velocity profiles
are very close to those displayed in \cite{Balsara2012} for the same jet without a
magnetic field.
This is not surprising, because the magnetic field is weak in our case.
In this simulation, it is also necessary to enforce the condition
\eqref{eq:2Dadmissiblity} by the PP limiter, otherwise the
DG code will break down at $t \approx 5.58 \times 10^{-5}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.98\textwidth]{jet1_all}
\caption{\small
High Mach number jet with a weak magnetic field $B_a=\sqrt{20}$.
The schlieren images of density logarithm, gas pressure logarithm,
velocity $|{\bf v}|$ and magnetic pressure (from left to right) at
$t=0.002$.
}\label{fig:jet1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\textwidth]{jet2_all}
\caption{\small
High Mach number jet with a strong magnetic field $B_a=\sqrt{200}$:
the schlieren image of $\delta_{ij}$ (left)
and its slice along ${\tt x}=0.03125$ (right).
}\label{fig:jet1b}
\end{figure}
To investigate the importance of the proposed DDF condition \eqref{eq:DivB:cst12} in
Theorem \ref{thm:PP:2DMHD}, we now try to simulate the jet in a
moderately magnetized case with $B_a=\sqrt{200}$ (the corresponding
plasma-beta $\beta = 0.01$) on the mesh of $400 \times 600$ cells.
In this case, the locally divergence-free third-order DG method with the PP limiter
breaks down at $t\approx 0.00024$.
This failure results from inadmissible computed
cell averages of the conservative variables, detected in
the four cells
centered at the points
$(-0.03125, 0.11625)$, $(-0.03125, 0.11875)$, $(0.03125,0.11625)$,
and $(0.03125, 0.11875)$.
As expected from Theorem \ref{thm:PP:2DMHD:further},
these inadmissible
cell averages correspond to negative numerical pressure (internal energy)
due to the violation of DDF condition \eqref{eq:DivB:cst12}.
We recall that Theorem \ref{thm:PP:2DMHD:further} implies for
the inadmissible
cell averages that
$$
0 \ge {\mathcal E} ( \bar{\bf U}_{ij}^{n+1} ) > -\Delta t_n \big(\bar{\bf v}_{ij}^{n+1} \cdot
\bar{\bf B}_{ij}^{n+1} \big) {\rm div}_{ij}{\bf B}^n.
$$
If one defines
\begin{align*}
\begin{split}
&{\mathcal P}_{ij} :=
{\mathcal E} ( \bar{\bf U}_{ij}^{n+1} )
+ \Delta t_n \big(\bar{\bf v}_{ij}^{n+1} \cdot
\bar{\bf B}_{ij}^{n+1} \big) {\rm div}_{ij}{\bf B}^n > 0,
\\
&{\mathcal N}_{ij} :=
-\Delta t_n \big(\bar{\bf v}_{ij}^{n+1} \cdot
\bar{\bf B}_{ij}^{n+1} \big) {\rm div}_{ij}{\bf B}^n,
\end{split}
\end{align*}
and $\delta_{ij} := {\mathcal N}_{ij} / {\mathcal P}_{ij} $,
then $\delta_{ij}\le-1$ for inadmissible cell averages,
and $\delta_{ij} > -1$ for admissible cell averages.
As we see from the proof of Theorem \ref{thm:PP:2DMHD:further}, ${\mathcal N}_{ij}$ can be considered as the dominant negative part of
${\mathcal E} ( \bar{\bf U}_{ij}^{n+1} )$
caused by the discrete divergence error ${\rm div}_{ij}{\bf B}^n$, while
${\mathcal P}_{ij}$ is the main positive part contributed by the condition
\eqref{eq:2Dadmissiblity} enforced by the PP limiter.
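From the definitions above, ${\mathcal E} ( \bar{\bf U}_{ij}^{n+1} ) = {\mathcal P}_{ij} + {\mathcal N}_{ij} = {\mathcal P}_{ij}(1+\delta_{ij})$ with ${\mathcal P}_{ij}>0$, so the internal energy is non-positive exactly when $\delta_{ij}\le -1$. A tiny Python illustration with hypothetical values of $({\mathcal P}_{ij},{\mathcal N}_{ij})$:

```python
# hypothetical (P, N) samples: P is the positive part enforced by the PP limiter,
# N the part induced by the divergence error; E = P + N and delta = N / P
samples = [(0.3, -0.45), (0.2, -0.1), (0.5, 0.1), (0.4, -0.6)]

for P, N in samples:
    E, delta = P + N, N / P
    assert (E <= 0) == (delta <= -1)  # the two criteria coincide (P > 0)
    print(f"P={P}, N={N}: E={E:+.2f}, delta={delta:+.2f}")
```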
As evidence, Fig. \ref{fig:jet1b} gives a close-up of
the schlieren image of $\delta_{ij}$
and its slice along ${\tt x}=0.03125$.
It clearly shows the two subregions with small values $\delta_{ij}<-1$,
and the four detected cells with inadmissible
cell averages are exactly located in those two subregions.
This further confirms our analysis in
Theorem \ref{thm:PP:2DMHD:further} and shows that
the DDF condition \eqref{eq:DivB:cst12} is crucial for achieving
completely PP schemes in 2D.
The code also fails on a refined mesh, and for
more strongly magnetized cases.
More numerical results further supporting our analysis can be found in \cite{WuShu2018},
where the proposed theoretical techniques are applied
to design multi-dimensional provably PP DG schemes via the discretization of symmetrizable ideal MHD equations.
\section{Conclusions}\label{sec:con}
We presented the rigorous PP analysis of
conservative schemes with the LF flux for one- and multi-dimensional ideal
MHD equations.
It was based on several important properties of the admissible state set,
including a novel equivalent form, convexity, orthogonal invariance, and
the generalized LF splitting properties.
The analysis was focused on the finite volume or discontinuous Galerkin schemes on uniform Cartesian meshes.
In the 1D case, we proved that the LF scheme
with proper numerical viscosity is PP, and
the high-order schemes are PP under accessible conditions.
In the 2D case,
our analysis revealed for the first time that a discrete divergence-free (DDF) condition is
crucial for achieving
the PP property of schemes for ideal MHD.
We proved that
the 2D LF scheme with proper numerical viscosity preserves the
positivity and the DDF condition.
We derived sufficient conditions for achieving 2D PP high-order schemes. A lower bound on the internal energy was derived for the case where the
proposed DDF condition is not satisfied, showing that
negative internal energy is more likely to be computed in cases with large $|{\bf v}\cdot {\bf B}|$ and large discrete divergence error. Our analyses were further confirmed by the numerical examples, and extended to the 3D case.
In addition, several usually-expected properties
were disproved in this paper. Specifically, we rigorously showed that:
(i) the LF splitting property does not always hold;
(ii) the 1D LF scheme with standard numerical viscosity or
piecewise constant $B_1$ is not PP in general, no matter how small the CFL number is;
(iii) the 2D LF scheme is not always PP under any CFL condition, unless additional condition like the DDF condition is satisfied.
As a result, some existing techniques for PP analysis become
inapplicable in the MHD case.
These, together with the technical challenges arising from the solenoidal magnetic field and
the intrinsic complexity of the MHD system, make the proposed analysis very nontrivial.
From the viewpoint of preserving positivity, our analyses provided
a new understanding of the importance of divergence-free
condition in robust MHD simulations.
Our analyses and novel techniques, as well as the provably PP schemes, can also be useful for investigating
or designing
other
PP schemes for ideal MHD.
In \cite{WuShu2018},
we applied the proposed analysis approach
to develop multi-dimensional provably PP high-order
methods for the symmetrizable version of the ideal
MHD equations.
The extension of the PP analysis to less dissipative
numerical fluxes and to more general/unstructured meshes will be studied in a forthcoming paper.
\appendix
\section{Additional Proofs}
\subsection[Proof of Proposition 2.5]{{Proof of Proposition \ref{lemma:2.8}}}
\label{sec:proof}
\begin{proof}
We prove it by contradiction.
Assume that \eqref{eq:LFproperty} always holds.
Let $\alpha = \chi {\mathscr{R}}_i $ with the constant $\chi \ge 1$.
For any ${\tt p}\in \big(0,\frac1{\gamma}\big)$,
consider the admissible state
${\bf U}=(1,0,0,0,1,0,0, \frac{\tt p}{\gamma-1} + \frac{1}{2} )^\top$ for an ideal gas. Then \eqref{eq:LFproperty} implies that
\begin{equation*}
{\bf U} \pm \frac{ {\bf F}_1 ({\bf U}) }{\alpha} =
\bigg(
1,\pm \frac{ {\tt p} - \frac{1}{2} } {\chi},0,0,1,0,0,\frac{\tt p}{\gamma-1} + \frac{1}{2}
\bigg)^{\top}\in {\mathcal G}.
\end{equation*}
This means that, for all ${\tt p}\in \big(0,\frac1{\gamma}\big)$,
$$
0< {\mathcal E} ( {\bf U} \pm \alpha^{-1} {\bf F}_1 ({\bf U}) ) = \frac{\tt p}{\gamma-1} - { \frac{1}{2\chi^2} } \bigg( {\tt p} - \frac{1}2 \bigg)^2=:{\mathcal E}_f({\tt p}).
$$
The continuity of ${\mathcal E}_f ({\tt p})$ further implies
$$
0 \le \mathop {\lim }\limits_{ {\tt p} \to 0^+ } {\mathcal E}_f ( {\tt p} ) = - \frac{1}{ 8\chi^2} < 0,
$$
which is a contradiction. Hence \eqref{eq:LFproperty} does not always hold.
The proof is completed.
\end{proof}
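The contradiction above is easily reproduced numerically: ${\mathcal E}_f({\tt p})$ is already negative for every small ${\tt p}$, and tends to $-\frac{1}{8\chi^2}$ as ${\tt p}\to 0^+$. A quick Python check with $\gamma=1.4$ and $\chi=1$:

```python
def E_f(p, gamma=1.4, chi=1.0):
    # internal energy of U +/- F_1(U)/alpha in the counterexample above
    return p / (gamma - 1) - (p - 0.5) ** 2 / (2 * chi**2)

for p in (1e-2, 1e-4, 1e-6, 1e-8):
    assert E_f(p) < 0  # admissibility already fails for small p

print(E_f(1e-12))  # approaches -1/(8 chi^2) = -0.125
```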
\subsection{Proof of Proposition \ref{lem:fornewGLF}}
\label{sec:proofwkl111}
\begin{proof}
The proof is similar to that of Lemma \ref{theo:MHD:LLFsplit}.
Without loss of generality, we only
show \eqref{eq:MHD:LLFsplit:one-state} for $i=1$;
the cases $i=2$ and $i=3$ can then be proved by using
the
orthogonal invariance in Lemma \ref{lem:MHD:zhengjiao}, similarly
to part (ii) of the proof of Lemma \ref{theo:MHD:LLFsplit}.
For $i=1$, define
\begin{align*}
& \hat \Pi_u = {\bf U} \cdot {\bf n}^* + \frac{|{\bf B}^*|^2}{2}, \quad
\hat \Pi_f= {\bf F}_1({\bf U}) \cdot {\bf n}^* + v_i^* \frac{|{\bf B}^*|^2}{2} - B_i({\bf v}^* \cdot {\bf B}^*) .
\end{align*}
Then it suffices to show
\begin{equation}\label{eq:needproof1A}
\frac{|\hat \Pi_f|}{\hat \Pi_u} \le \widehat \alpha_1 ({\bf U},{v}_1^*),
\end{equation}
by noting that
\begin{equation}\label{eq:PiuA}
\hat \Pi_u = | \hat {\bm \theta}|^2 > 0 ,
\end{equation}
where the nonzero vector $\hat {\bm \theta} \in {\mathbb{R}}^{7}$ is defined as
$$
\hat {\bm \theta}= \frac1{\sqrt{2}} \Big(
\sqrt{\rho} ( {\bf v} - {\bf v}^* ),~
{\bf B}-{\bf B}^*,~
\sqrt{2 \rho e}
\Big)^\top.
$$
The proof of \eqref{eq:needproof1A} is divided into the following two steps.
{\tt Step 1}. Reformulate $\hat \Pi_f$ as a quadratic form in the variables $\hat \theta_j, 1\le j \le 7$.
Unlike in Lemma \ref{theo:MHD:LLFsplit}, one cannot
require that the coefficients of this quadratic form be independent of ${\bf v}^*$ and ${\bf B}^*$,
but one can require that they do not depend on
${ v}_2^*$, $v_3^*$, and ${\bf B}^*$.
Similar to the proof of Lemma \ref{theo:MHD:LLFsplit}, we arrange $\hat \Pi_f$ as
\begin{equation}\label{eq:3partsA}
\hat \Pi_f = \frac12 v_1^* |{\bf B}-{\bf B}^*|^2
+ \frac{\rho v_1 }2 |{\bf v}-{\bf v}^*|^2 + v_1 \rho e + p ( v_1 - v_1^* )
+ \sum_{j=2}^3 ( B_j (v_1-v_1^*)
- B_1( v_j -v_j^* ) ) ( B_j - B_j^* ).
\end{equation}
Then we immediately have
\begin{equation}\label{eq:B}
\hat \Pi_f= v_1^* \sum_{j=4}^6 \hat \theta_j^2 +
v_1 \Big( \sum_{j=1}^3 \hat \theta_j^2 + \hat \theta_7^2 \Big)
+ 2{\mathscr{C}}_s
\hat \theta_1 \hat \theta_7 + \frac{ 2 B_2} { \sqrt{\rho} } \hat \theta_1 \hat \theta_5 + \frac{ 2 B_3} { \sqrt{\rho} } \hat \theta_1 \hat \theta_6
- \frac{2B_1}{\sqrt{\rho}} (\hat \theta_2 \hat \theta_5 + \hat \theta_3 \hat \theta_6).
\end{equation}
{\tt Step 2}. Estimate the upper bound of $\frac{|\hat \Pi_f|}{\hat \Pi_u}$.
Note that
\begin{align*}
\hat \Pi_f & = v_1^* \sum_{j=4}^6 \hat \theta_j^2 + v_1 \Big( \sum_{j=1}^3 \hat \theta_j^2 + \hat \theta_7^2 \Big)
+ \hat {\bm \theta}^\top {\bf A}_7 \hat {\bm \theta} ,
\end{align*}
where
$$
{\bf A}_7 = \begin{pmatrix}
0 & 0 & 0 & 0 & B_2 \rho^{-\frac12} & B_3 \rho^{-\frac12} & {\mathscr{C}}_s \\
0 & 0 & 0 & 0 & -B_1 \rho^{-\frac12} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -B_1 \rho^{-\frac12} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
B_2 \rho^{-\frac12} & -B_1 \rho^{-\frac12} & 0 & 0 & 0 & 0 & 0 \\
B_3 \rho^{-\frac12} & 0 & -B_1 \rho^{-\frac12} & 0 & 0 & 0 & 0 \\
{\mathscr{C}}_s & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
$$
The spectral radius
of ${\bf A}_7$ is $ {\mathscr{C}}_1$. This gives the following estimate
\begin{equation} \label{eq:PI4A}
\begin{aligned}
|\hat \Pi_f| &
\le |v_1^*| \sum_{j=4}^6 \hat \theta_j^2
+
|v_1| \bigg( \sum_{j=1}^3
\hat \theta_{j}^2 + \hat \theta_7^2 \bigg)
+ | \hat{\bm \theta}^\top {\bf A}_7 \hat {\bm \theta} |
\\
&\le \max\{|v_1|,| v_1^*|\} |\hat {\bm \theta} |^2
+ {\mathscr{C}}_1 |\hat {\bm \theta} |^2
= \widehat \alpha_1 ({\bf U},v_1^*) |\hat{\bm \theta} |^2
= \widehat \alpha_1 ({\bf U},v_1^*) \hat \Pi_u.
\end{aligned}
\end{equation}
Therefore, the inequality \eqref{eq:needproof1A} holds. The proof is completed.
\end{proof}
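The key facts used in {\tt Step 2}, namely that ${\bf A}_7$ is symmetric with spectral radius ${\mathscr{C}}_1$ and hence $|\hat{\bm \theta}^\top {\bf A}_7 \hat{\bm \theta}| \le {\mathscr{C}}_1 |\hat{\bm \theta}|^2$, can be checked numerically. In the Python sketch below, ${\mathscr{C}}_1$ is taken to be the fast-speed-type quantity $\big(\frac12\big(c^2+\sqrt{c^4 - 4{\mathscr{C}}_s^2 B_1^2/\rho}\big)\big)^{1/2}$ with $c^2 := {\mathscr{C}}_s^2 + |{\bf B}|^2/\rho$; this formula is our reading of the definition given earlier in the paper and should be treated as an assumption here:

```python
import numpy as np

rho, B, Cs = 1.3, np.array([0.7, -0.4, 1.1]), 0.9  # sample state values
b1, b2, b3 = B / np.sqrt(rho)

# upper-triangular part of A7 as displayed above (0-based indices)
A7 = np.zeros((7, 7))
A7[0, 4], A7[0, 5], A7[0, 6] = b2, b3, Cs
A7[1, 4] = A7[2, 5] = -b1
A7 = A7 + A7.T                                   # A7 is symmetric

spec = max(abs(np.linalg.eigvalsh(A7)))          # spectral radius of A7

# assumed formula for C_1 (fast-speed-type quantity)
c2 = Cs**2 + B @ B / rho
C1 = np.sqrt(0.5 * (c2 + np.sqrt(c2**2 - 4 * Cs**2 * B[0] ** 2 / rho)))
assert np.isclose(spec, C1)

# the quadratic-form bound used in the estimate of |Pi_f|
rng = np.random.default_rng(0)
for theta in rng.standard_normal((100, 7)):
    assert abs(theta @ A7 @ theta) <= spec * (theta @ theta) + 1e-12
```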
\subsection{Deriving generalized LF splitting property by
Proposition \ref{lem:fornewGLF}}\label{sec:gLFnewproof}
\begin{proposition}\label{prop:n1DgLF}
The 1D generalized LF splitting property in Theorem \ref{theo:MHD:LLFsplit1D} holds for any $\alpha $
satisfying
$$
\alpha > \widetilde\alpha_1 (\hat{\bf U},\check{\bf U}),
$$
where $\widetilde\alpha_1$ is an alternative bound given by
$$
\widetilde\alpha_1 (\hat{\bf U},\check{\bf U})
:=
\max \{ \hat{\mathscr{C}}_1, \check{\mathscr{C}}_1
\} +
\max \{ |\hat v_1| + \hat{\mathscr{D}}_1, |\check v_1| + \check{\mathscr{D}}_1
\}
$$
with
$$\hat{\mathscr{D}}_1 := \frac{|\hat p_{ tot}- \hat B_1^2|}{
\hat \rho (A_1-\hat v_1 )},
\qquad
\check{\mathscr{D}}_1 := \frac{|\check p_{ tot}- \check B_1^2|}{
\check \rho (A_1+\check v_1 )},
\qquad
A_1 := \max\{ |\hat v_1|, |\check v_1| \}+ \max\{ \hat{\mathscr{C}}_1, \check{\mathscr{C}}_1 \}.
$$
For the ideal EOS \eqref{eq:EOS}, the property also holds for
any $\alpha > \widetilde {\widetilde \alpha}_1 (\hat{\bf U},\check{\bf U}) $, where
$$
\widetilde {\widetilde \alpha}_1 (\hat{\bf U},\check{\bf U})
:=
\max \{ \hat{\mathcal{C}}_1, \check{\mathcal{C}}_1
\} +
\max \{ |\hat v_1| + \hat{\mathcal{D}}_1, |\check v_1| + \check{\mathcal{D}}_1
\}
$$
with
$$\hat{\mathcal{D}}_1 := \frac{|\hat p_{ tot}- \hat B_1^2|}{
\hat \rho ({\mathcal A}_1-\hat v_1 )},
\qquad
\check{\mathcal{D}}_1 := \frac{|\check p_{ tot}- \check B_1^2|}{
\check \rho ({\mathcal A}_1+\check v_1 )},
\qquad
{\mathcal A}_1 := \max\{ |\hat v_1|, |\check v_1| \}+ \max\{ \hat{\mathcal{C}}_1, \check{\mathcal{C}}_1 \}.
$$
\end{proposition}
\begin{proof}
Let $\overline \rho$, $\overline {\bf v}$
and $\overline {\bf B}$ respectively denote the density, velocity and magnetic field corresponding to
$\overline{\bf U}$.
Note that
$$\widetilde \alpha_1(\hat{\bf U},\check{\bf U}) >
\max\{ |\hat v_1|, |\check v_1| \},$$
which implies
$$
\overline \rho =
\frac12 \bigg( \hat \rho \Big(1-\frac{\hat v_1}{\alpha}\Big) + \check\rho \Big(1+\frac{\check v_1}{\alpha}\Big) \bigg) > 0,$$
for any $\alpha > \widetilde\alpha_1 (\hat{\bf U},\check{\bf U}) $.
It then suffices to show ${\mathcal E} (\overline {\bf U}) >0$.
Define
$\overline{\bf n} = \big( \frac{|\overline{\bf v}|^2}2,~- \overline{\bf v},~-\overline{\bf B},~1 \big)^\top,$
then we can reformulate $2{\mathcal E} (\overline {\bf U}) $ as
\begin{align} \nonumber
2{\mathcal E} (\overline {\bf U})
& = 2\overline{\bf U} \cdot \overline{\bf n}
+ {|\overline{\bf B} |^2 }
= \bigg( \hat{\bf U} - \frac{ {\bf F}_1(\hat{\bf U})}{\alpha}
+
\check{\bf U} + \frac{ {\bf F}_1(\check{\bf U})}{\alpha} \bigg)
\cdot \overline{\bf n}
+ {|\overline{\bf B} |^2 }
\\ \nonumber
&= \bigg[\bigg( \hat{\bf U} - \frac{ {\bf F}_1(\hat{\bf U})}{\alpha}
\bigg) \cdot \overline{\bf n} + \frac{|\overline{\bf B}|^2}{2}
- \frac{ 1 }{\alpha} \bigg( \overline v_1 \frac{|\overline{\bf B}|^2}{2} - \hat B_1(\overline{\bf v} \cdot \overline{\bf B}) \bigg) \bigg]
\\ \nonumber
&\quad + \bigg[ \bigg( \check{\bf U} + \frac{ {\bf F}_1(\check{\bf U})}{\alpha}
\bigg) \cdot \overline{\bf n} + \frac{|\overline{\bf B}|^2}{2}
+ \frac{ 1 }{\alpha} \bigg( \overline v_1 \frac{|\overline{\bf B}|^2}{2} - \check B_1(\overline{\bf v} \cdot \overline{\bf B}) \bigg) \bigg]
\\ \label{eq:aaawu11}
&=: \Pi_1 + \Pi_2,
\end{align}
where the DDF condition \eqref{eq:descrite1DDIV} has been used.
We then use Proposition \ref{lem:fornewGLF} to prove $\Pi_i>0,i=1,2,$ by
verifying that
$$
\widetilde\alpha_1 (\hat{\bf U},\check{\bf U}) \ge
\widehat \alpha_1 (\hat{ \bf U }, \overline v_1),\qquad
\widetilde\alpha_1 (\hat{\bf U},\check{\bf U}) \ge
\widehat \alpha_1 (\check{ \bf U }, \overline v_1).
$$
It is sufficient to show that
$$
\widetilde\alpha_1 (\hat{\bf U},\check{\bf U}) \ge
\max\{ |\hat v_1|, |\overline v_1|, |\check v_1| \}
+ \max \{ \hat{\mathscr{C}}_1, \check{\mathscr{C}}_1
\},
$$
or equivalently
$$
\max \{ |\hat v_1| + \hat{\mathscr{D}}_1, |\check v_1| + \check{\mathscr{D}}_1
\} \ge \max\{ |\hat v_1|, |\overline v_1|, |\check v_1| \},
$$
which can be verified easily by noting that
$$
{\mathcal A}_1 \le \widetilde\alpha_1 (\hat{\bf U},\check{\bf U}) < \alpha.
$$
Therefore, $\Pi_i>0,i=1,2,$ by Proposition \ref{lem:fornewGLF}.
It follows from \eqref{eq:aaawu11} that ${\mathcal E} (\overline {\bf U})>0$. Hence $\overline {\bf U} \in {\mathcal G}$ for any $\alpha > {\widetilde\alpha}_1 (\hat{\bf U},\check{\bf U})$.
In the ideal EOS case, by noting that ${\mathcal C}_1 > {\mathscr{C}}_1$,
similar arguments
imply $\overline {\bf U} \in {\mathcal G}$ for any $\alpha > \widetilde{\widetilde\alpha}_1 (\hat{\bf U},\check{\bf U})$.
The proof is completed.
\end{proof}
\begin{remark}
It is worth mentioning that
the estimated lower bounds $\widetilde \alpha_1$ and $\widetilde{ \widetilde \alpha}_1$ are not as sharp as the bound
$\alpha_1$ in Theorem \ref{theo:MHD:LLFsplit1D}.
Note that here the lower bound of $\alpha$ is said to be {\em sharper} if it is {\em smaller}, indicating that the resulting generalized
LF splitting properties hold for a {\em larger}
range of $\alpha$.
The {\em sharper} (i.e., {\em smaller}) lower bound is {\em more desirable}, because it corresponds to a {\em less dissipative} LF flux (allowing smaller numerical viscosity) in our provably PP schemes.
\end{remark}
\begin{remark}
Let us define
$$ {\mathscr H }_i({\bf U}) :=| v_i| + \frac{| p_{ tot}- B_i^2|}{
\rho {\mathscr{C}}_i},\quad
{\mathcal H }_i({\bf U}) :=| v_i| + \frac{| p_{ tot}- B_i^2|}{
\rho {\mathcal{C}}_i}.
$$
Since $\widetilde\alpha_1$ and $\widetilde{\widetilde\alpha}_1$ in Proposition \ref{prop:n1DgLF} only serve as lower bounds for the admissible range of $\alpha$,
they can be replaced with simpler but larger (less sharp) ones,
e.g.,
\begin{align*}
&\widetilde\alpha_1 (\hat{\bf U},\check{\bf U})
\qquad
\longleftrightarrow
\qquad
\max \{ \hat{\mathscr{C}}_1, \check{\mathscr{C}}_1
\} +
\max \big\{ {\mathscr H }_1(\hat{\bf U}), {\mathscr H }_1(\check{\bf U})
\big\},
\\
&
\widetilde{ \widetilde \alpha}_1 (\hat{\bf U},\check{\bf U})
\qquad \longleftrightarrow \qquad
\max \{ \hat{\mathcal{C}}_1, \check{\mathcal{C}}_1
\} +
\max \big\{ {\mathcal H }_1(\hat{\bf U}), {\mathcal H }_1(\check{\bf U})
\big\}.
\end{align*}
\end{remark}
The 2D and 3D generalized LF splitting properties
can also be similarly derived by Proposition \ref{lem:fornewGLF}.
\section{3D Positivity-Preserving Analysis}\label{sec:3D}
The extension of our PP analysis to the 3D case
is straightforward and, for completeness, is given as follows.
We only present the main theorems, and omit the proofs,
which are very similar to the 2D case except for using the 3D generalized LF splitting property in Theorem \ref{theo:MHD:LLFsplit3D}.
To avoid confusing subscripts,
the symbols $({\tt x},{\tt y},{\tt z})$ are used to denote the variables $(x_1,x_2,x_3)$ in \eqref{eq:MHD}.
Assume that the 3D spatial domain is divided into a uniform cuboid mesh
with cells $\big\{I_{ijk}=({\tt x}_{i-\frac{1}{2}},{\tt x}_{i+\frac{1}{2}})\times
({\tt y}_{j-\frac{1}{2}},{\tt y}_{j+\frac{1}{2}})
\times
({\tt z}_{k-\frac{1}{2}},{\tt z}_{k+\frac{1}{2}}) \big\}$.
The spatial step-sizes in ${\tt x},{\tt y},{\tt z}$ directions are denoted by
$\Delta x,\Delta y,\Delta z$ respectively. The time interval is also divided into the mesh $\{t_0=0, t_{n+1}=t_n+\Delta t_{n}, n\geq 0\}$
with the time step size $\Delta t_{n}$ determined by the CFL condition. We use $\bar {\bf U}_{ijk}^n $
to denote the numerical approximation to the cell average of the exact solution over $I_{ijk}$ at time $t_n$.
\subsection{First-order scheme}
We consider the 3D first-order LF scheme
\begin{equation} \label{eq:3DMHD:LFscheme}
\begin{split}
\bar {\bf U}_{ijk}^{n+1} = \bar {\bf U}_{ijk}^n &- \frac{\Delta t_n}{\Delta x} \Big(
\hat {\bf F}_1 ( \bar {\bf U}_{ijk}^n ,\bar {\bf U}_{i+1,j,k}^n)
- \hat {\bf F}_1 ( \bar {\bf U}_{i-1,j,k}^n ,\bar {\bf U}_{ijk}^n) \Big)
\\
&- \frac{\Delta t_n}{\Delta y} \Big(
\hat {\bf F}_2 ( \bar {\bf U}_{ijk}^n ,\bar {\bf U}_{i,j+1,k}^n)
- \hat {\bf F}_2 ( \bar {\bf U}_{i,j-1,k}^n ,\bar {\bf U}_{ijk}^n)
\Big)
\\
&
- \frac{\Delta t_n}{\Delta z} \Big(
\hat {\bf F}_3 ( \bar {\bf U}_{ijk}^n ,\bar {\bf U}_{i,j,k+1}^n)
- \hat {\bf F}_3 ( \bar {\bf U}_{i,j,k-1}^n ,\bar {\bf U}_{ijk}^n)
\Big) ,
\end{split}
\end{equation}
where $\hat {\bf F}_\ell (\cdot,\cdot), \ell=1,2,3,$ are the LF fluxes in \eqref{eq:LFflux}.
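For illustration, the unsplit first-order update \eqref{eq:3DMHD:LFscheme} with the CFL-type time step $\Delta t_n = {\tt C}/(\alpha_1/\Delta x + \alpha_2/\Delta y + \alpha_3/\Delta z)$ can be sketched as follows. The sketch replaces the MHD fluxes ${\bf F}_\ell$ by scalar linear advection $f_\ell(u)=c_\ell u$ (a hypothetical stand-in, not the MHD system) on a periodic mesh, and assumes the standard local LF flux $\hat F(u^-,u^+)=\tfrac12\big(f(u^-)+f(u^+)\big)-\tfrac{\alpha}{2}(u^+-u^-)$:

```python
import numpy as np

# Illustrative sketch of the unsplit first-order LF update on a periodic
# cuboid mesh.  The MHD fluxes F_l are replaced by scalar linear advection
# f_l(u) = c_l * u (a hypothetical stand-in), with the standard LF flux
# 0.5*(f(u-) + f(u+)) - 0.5*alpha*(u+ - u-).
def lf_flux(um, up, f, alpha):
    return 0.5 * (f(um) + f(up)) - 0.5 * alpha * (up - um)

def lf_step(u, dt, h, c):
    unew = u.copy()
    for axis in range(3):
        cl, hl = c[axis], h[axis]
        f = lambda v: cl * v
        alpha = abs(cl)                  # wave-speed bound in this direction
        up = np.roll(u, -1, axis)        # neighbor U_{.+1}
        um = np.roll(u, 1, axis)         # neighbor U_{.-1}
        unew = unew - dt / hl * (lf_flux(u, up, f, alpha)
                                 - lf_flux(um, u, f, alpha))
    return unew

h, c, C = (0.1, 0.2, 0.1), (1.0, -0.5, 0.25), 0.9
dt = C / sum(abs(cl) / hl for cl, hl in zip(c, h))   # CFL-type time step
rng = np.random.default_rng(1)
u = rng.random((8, 8, 8))
lo, hi, tot = u.min(), u.max(), u.sum()
for _ in range(20):
    u = lf_step(u, dt, h, c)
# periodic conservation and a discrete max principle hold under the CFL bound
```

Under the CFL bound the update is a convex combination of neighboring values, which is the same mechanism behind the invariant-region preservation established in the theorems for the full MHD system.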
We have the following conclusions.
\begin{theorem}\label{theo:3DcounterEx}
Let $\alpha_{\ell,n}^{\tt LF} = \chi \max_{ijk}{ {\mathscr{R}}_\ell (\bar{\bf U}^n_{ijk})}$ with the
constant $\chi \ge 1$, and
$$\Delta t_n = \frac{ {\tt C} }{ \alpha_{1,n}^{\tt LF} /\Delta x + \alpha_{2,n}^{\tt LF} / \Delta y
+ \alpha_{3,n}^{\tt LF} / \Delta z}, $$
where ${\tt C}>0$ is the CFL number.
For any given constants $\chi$ and $\tt C$,
there always exists a set of admissible states
$\{ \bar {\bf U}_{ijk}^{n},\forall i,j,k\}$
such that the solution $\bar {\bf U}_{ijk}^{n+1}$
of \eqref{eq:3DMHD:LFscheme} does not belong to $\mathcal G$.
In other words, for any given $\chi$ and $\tt C$,
the admissibility of $\{ \bar {\bf U}_{ijk}^{n}, \forall i,j,k \}$ does not always guarantee that $\bar {\bf U}_{ijk}^{n+1} \in {\mathcal G}$, $\forall i,j,k$.
\end{theorem}
\begin{theorem} \label{theo:3DMHD:LFscheme}
If, for all $i,j,k$, $\bar {\bf U}_{ijk}^n \in {\mathcal G}$ and the following DDF condition holds
\begin{equation}\label{eq:3DDisDivB}
\begin{split}
& \mbox{\rm div} _{ijk} \bar {\bf B}^n := \frac{ \left( \bar B_1\right)_{i+1,j,k}^n - \left( \bar B_1 \right)_{i-1,j,k}^n } {2\Delta x}
\\
&
+ \frac{ \left( \bar B_2 \right)_{i,j+1,k}^n - \left( \bar B_2 \right)_{i,j-1,k}^n } {2\Delta y}
+ \frac{ \left( \bar B_3 \right)_{i,j,k+1}^n - \left( \bar B_3 \right)_{i,j,k-1}^n } {2\Delta z} = 0,
\end{split}
\end{equation}
then the solution $ \bar {\bf U}_{ijk}^{n+1}$ of \eqref{eq:3DMHD:LFscheme} always belongs to ${\mathcal G}$ under the CFL condition
\begin{equation}\label{eq:CFL:LF3D}
0< \frac{ \alpha_{1,n}^{\tt LF} \Delta t_n}{\Delta x} + \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n}{\Delta y}
+ \frac{ \alpha_{3,n}^{\tt LF} \Delta t_n}{\Delta z} \le 1,
\end{equation}
where the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\begin{equation}\label{eq:Lxa123}
\begin{gathered}
\alpha_{1,n}^{\tt LF} > \max_{i,j,k} \alpha_1 ( \bar {\bf U}_{i+1,j,k}^n, \bar {\bf U}_{i-1,j,k}^n ),\quad
\alpha_{2,n}^{\tt LF} > \max_{i,j,k} \alpha_2 ( \bar {\bf U}_{i,j+1,k}^n, \bar {\bf U}_{i,j-1,k}^n ),
\\
\alpha_{3,n}^{\tt LF} > \max_{i,j,k} \alpha_3 ( \bar {\bf U}_{i,j,k+1}^n, \bar {\bf U}_{i,j,k-1}^n ).
\end{gathered}
\end{equation}
\end{theorem}
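The DDF condition \eqref{eq:3DDisDivB} is satisfied exactly, for instance, by magnetic fields that are discrete curls of a potential, because the periodic central-difference operators commute; the following sketch (illustrative only) checks this up to round-off:

```python
import numpy as np

# Central-difference operators on a periodic mesh commute, so the discrete
# divergence (as in the DDF condition) of a discrete curl vanishes up to
# round-off.  Illustrative check only.
def D(f, axis, h):
    # periodic central difference (f_{+1} - f_{-1}) / (2h)
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

rng = np.random.default_rng(0)
h = (0.2, 0.25, 0.5)
A1, A2, A3 = (rng.random((12, 12, 12)) for _ in range(3))
B1 = D(A3, 1, h[1]) - D(A2, 2, h[2])   # discrete curl of the potential A
B2 = D(A1, 2, h[2]) - D(A3, 0, h[0])
B3 = D(A2, 0, h[0]) - D(A1, 1, h[1])
div = D(B1, 0, h[0]) + D(B2, 1, h[1]) + D(B3, 2, h[2])
# div is zero up to floating-point cancellation errors
```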
\begin{theorem} \label{theo:3DDivB:LFscheme}
For the LF scheme \eqref{eq:3DMHD:LFscheme},
the divergence error
$$ \varepsilon_{\infty}^n := \max_{ijk} \left| {\rm div}_{ijk} \bar {\bf B}^{n} \right| ,$$
does not grow with $n$
under the condition \eqref{eq:CFL:LF3D}.
Furthermore, the numerical solutions
$\{\bar {\bf U}_{ijk}^n\}$
satisfy \eqref{eq:3DDisDivB}
for all $i,j,k$ and $n \in \mathbb{N}$, if \eqref{eq:3DDisDivB} holds for the discrete initial data $\{\bar {\bf U}_{ijk}^0\}$.
\end{theorem}
\begin{theorem} \label{theo:3DFullPP:LFscheme}
Assume that the discrete initial data $\{\bar {\bf U}_{ijk}^0\}$ are admissible and satisfy \eqref{eq:3DDisDivB}, which can be met by, e.g., the following second-order approximation
\begin{align*}
&\Big( \bar \rho_{ijk}^0, \bar {\bf m}_{ijk}^0, \overline {(\rho e)}_{ijk}^0 \Big)
= \frac{1}{\Delta x \Delta y \Delta z} \iiint_{I_{ijk}} \big( \rho,{\bf m }, \rho e \big) ({\tt x},{\tt y},{\tt z},0) d{\tt x} d{\tt y} d{\tt z},
\\
& \left( \bar B_1 \right)_{ijk}^0 = \frac{1}{4 \Delta y\Delta z} \int_{ {\tt y}_{j-1} }^{ {\tt y}_{j+1 } }
\int_{ {\tt z}_{k-1} }^{ {\tt z}_{k+1 } } B_1( {\tt x}_i,{\tt y},{\tt z},0) d{\tt y} d{\tt z},
\\
&
\left( \bar B_2 \right)_{ijk}^0 = \frac{1}{4 \Delta x \Delta z} \int_{ {\tt x}_{i-1} }^{ {\tt x}_{i+1 } }
\int_{ {\tt z}_{k-1} }^{ {\tt z}_{k+1 } } B_2({\tt x},{\tt y}_j,{\tt z},0) d {\tt x} d{\tt z},
\\
&
\left( \bar B_3 \right)_{ijk}^0 = \frac{1}{4 \Delta x \Delta y} \int_{ {\tt x}_{i-1} }^{ {\tt x}_{i+1 } }
\int_{ {\tt y}_{j-1} }^{ {\tt y}_{j+1 } } B_3({\tt x},{\tt y},{\tt z}_k,0) d {\tt x} d{\tt y},
\\
&~\bar E_{ijk}^0 = \overline {(\rho e )}_{ijk}^0 + \frac12 \left( \frac{ |\bar{\bf m}_{ijk}^0|^2}{\bar \rho_{ijk}^0} + |\bar{\bf B}_{ijk}^0|^2 \right).
\end{align*}
If the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy \eqref{eq:Lxa123}, then under the CFL condition \eqref{eq:CFL:LF3D},
the LF scheme \eqref{eq:3DMHD:LFscheme} preserves both $\bar {\bf U}_{ijk}^{n+1} \in {\mathcal G}$ and the DDF condition \eqref{eq:3DDisDivB} for all $i$, $j$, $k$, and $n \in \mathbb{N}$.
\end{theorem}
\subsection{High-order schemes}
We focus on the forward Euler method for time discretization, and our analysis also works for
high-order explicit time discretization using the SSP methods \cite{Gottlieb2009}. To achieve high-order accuracy,
the approximate solution polynomials ${\bf U}_{ijk}^n ({\tt x},{\tt y},{\tt z})$ of degree $\tt K$
are built as approximations to the exact solution ${\bf U}({\tt x},{\tt y},{\tt z},t_n)$ within $I_{ijk}$. Such a polynomial vector ${\bf U}_{ijk}^n ({\tt x},{\tt y},{\tt z})$
is either reconstructed from the cell averages $\{\bar {\bf U}_{ijk}^n\}$ in the finite volume methods
or evolved in the DG methods. Moreover, the cell average of ${\bf U}_{ijk}^n({\tt x},{\tt y},{\tt z})$ over $I_{ijk}$ is $\bar {\bf U}_{ijk}^{n}$.
Let $\{ {\tt x}_i^{(\mu)} \}_{\mu=1}^{\tt Q}$, $\{ {\tt y}_j^{(\mu)} \}_{\mu=1}^{\tt Q}$
and $\{ {\tt z}_k^{(\mu)} \}_{\mu=1}^{\tt Q}$
denote the $\tt Q$-point Gauss quadrature nodes in the intervals $[ {\tt x}_{i-\frac12}, {\tt x}_{i+\frac12} ]$, $[ {\tt y}_{j-\frac12}, {\tt y}_{j+\frac12} ]$
and $[ {\tt z}_{k-\frac12}, {\tt z}_{k+\frac12} ]$, respectively. Let $\{\omega_\mu\}_{\mu=1}^{\tt Q}$ be the associated weights satisfying
$\sum_{\mu=1}^{\tt Q} \omega_\mu = 1$.
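With the weights normalized to sum to one, a $\tt Q$-point Gauss rule on a (unit) cell integrates polynomials of degree up to $2{\tt Q}-1$ exactly; a quick check using NumPy's Gauss--Legendre routine reads:

```python
import numpy as np

# Q-point Gauss rule with weights normalized to sum to one, mapped to a unit
# cell [0, 1]; it integrates polynomials of degree up to 2Q - 1 exactly.
Q = 3
x, w = np.polynomial.legendre.leggauss(Q)    # nodes/weights on [-1, 1]
w = w / 2.0                                  # normalize: sum(w) == 1
x = (x + 1.0) / 2.0                          # map nodes to [0, 1]
for k in range(2 * Q):                       # exact moments: int_0^1 x^k = 1/(k+1)
    assert abs(float((w * x**k).sum()) - 1.0 / (k + 1)) < 1e-12
```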
With the 2D tensorized quadrature rule for approximating the integrals of numerical fluxes on cell interfaces,
a finite volume scheme or discrete equation for the cell average in the DG method can be written as
\begin{equation}\label{eq:3DMHD:cellaverage}
\begin{split}
\bar {\bf U}_{ijk}^{n+1} & = \bar {\bf U}_{ijk}^{n}
- \frac{\Delta t_n}{\Delta x} \sum\limits_{\mu,\nu} \omega_\mu
\omega_\nu \left(
\hat {\bf F}_1( {\bf U}^{-,\mu,\nu}_{i+\frac{1}{2},j,k}, {\bf U}^{+,\mu,\nu}_{i+\frac{1}{2},j,k} ) -
\hat {\bf F}_1( {\bf U}^{-,\mu,\nu}_{i-\frac{1}{2},j,k}, {\bf U}^{+,\mu,\nu}_{i-\frac{1}{2},j,k})
\right) \\
& \quad - \frac{ \Delta t_n }{\Delta y} \sum\limits_{\mu,\nu} \omega_\mu
\omega_\nu \left(
\hat {\bf F}_2( {\bf U}^{\mu,-,\nu}_{i,j+\frac{1}{2},k} , {\bf U}^{\mu,+,\nu}_{i,j+\frac{1}{2},k} ) -
\hat {\bf F}_2( {\bf U}^{\mu,-,\nu}_{i,j-\frac{1}{2},k} , {\bf U}^{\mu,+,\nu}_{i,j-\frac{1}{2},k} )
\right)
\\
& \quad - \frac{ \Delta t_n }{\Delta z} \sum\limits_{\mu,\nu} \omega_\mu
\omega_\nu \left(
\hat {\bf F}_3( {\bf U}^{\mu,\nu,-}_{i,j,k+\frac{1}{2}} , {\bf U}^{\mu,\nu,+}_{i,j,k+\frac{1}{2}} ) -
\hat {\bf F}_3( {\bf U}^{\mu,\nu,-}_{i,j,k-\frac{1}{2}} , {\bf U}^{\mu,\nu,+}_{i,j,k-\frac{1}{2}} )
\right),
\end{split}
\end{equation}
where $\hat {\bf F}_\ell,\ell=1,2,3$ are the LF fluxes in \eqref{eq:LFflux}, and the limiting values are given by
\begin{align*}
&{\bf U}^{-,\mu,\nu}_{i+\frac{1}{2},j,k} = {\bf U}_{ijk}^n ({\tt x}_{i+\frac12},{\tt y}_j^{(\mu)},{\tt z}_k^{(\nu)}),\qquad
{\bf U}^{+,\mu,\nu}_{i-\frac{1}{2},j,k} = {\bf U}_{ijk}^n ({\tt x}_{i-\frac12},{\tt y}_j^{(\mu)},{\tt z}_k^{(\nu)}),
\\
&{\bf U}^{\mu,-,\nu}_{i,j+\frac{1}{2},k} = {\bf U}_{ijk}^n ({\tt x}_i^{(\mu)},{\tt y}_{j+\frac12},{\tt z}_k^{(\nu)}),\qquad
{\bf U}^{\mu,+,\nu}_{i,j-\frac{1}{2},k} = {\bf U}_{ijk}^n ({\tt x}_i^{(\mu)},{\tt y}_{j-\frac12},{\tt z}_k^{(\nu)}),
\\
&{\bf U}^{\mu,\nu,-}_{i,j,k+\frac{1}{2}} = {\bf U}_{ijk}^n ({\tt x}_i^{(\mu)},{\tt y}_j^{(\nu)},{\tt z}_{k+\frac12}),\qquad
{\bf U}^{\mu,\nu,+}_{i,j,k-\frac{1}{2}} = {\bf U}_{ijk}^n ({\tt x}_i^{(\mu)},{\tt y}_j^{(\nu)},{\tt z}_{k-\frac12}).
\end{align*}
For the accuracy requirement, $\tt Q$ should satisfy:
${\tt Q} \ge {\tt K}+1$ for a $\mathbb{P}^{\tt K}$-based DG method,
or ${\tt Q} \ge ({\tt K}+1)/2$ for a $({\tt K}+1)$-th order finite volume scheme.
We denote
\begin{align*}
& \overline{(B_1)}_{i+\frac{1}{2},j,k}^{\mu,\nu} := \frac12 \left( (B_1)_{i+\frac{1}{2},j,k}^{-,\mu,\nu} + (B_1)_{i+\frac{1}{2},j,k}^{+,\mu,\nu} \right),
\\
& \overline{ ( B_2) }_{i,j+\frac{1}{2},k}^{\mu,\nu} := \frac12 \left( ( B_2)_{i,j+\frac{1}{2},k}^{\mu,-,\nu} + ( B_2)_{i,j+\frac{1}{2},k}^{\mu,+,\nu} \right),
\\
& \overline{ ( B_3) }_{i,j,k+\frac{1}{2}}^{\mu,\nu} := \frac12 \left( ( B_3)_{i,j,k+\frac{1}{2}}^{\mu,\nu,-} + ( B_3)_{i,j,k+\frac{1}{2}}^{\mu,\nu,+} \right),
\end{align*}
and
define the discrete divergence of the numerical magnetic field ${\bf B}^n( {\tt x},{\tt y},{\tt z})$ as
\begin{align*}
& {\rm div} _{ijk} {\bf B}^n := \frac{\sum\limits_{\mu,\nu} \omega_\mu
\omega_\nu \left( \overline{(B_1)}_{i+\frac{1}{2},j,k}^{\mu,\nu}
- \overline{(B_1)}_{i-\frac{1}{2},j,k}^{\mu,\nu} \right)}{\Delta x}
\\
&
+ \frac{\sum \limits_{\mu,\nu} \omega_\mu \omega_\nu \left( \overline{ ( B_2)}_{i,j+\frac{1}{2},k}^{\mu,\nu}
- \overline{ ( B_2)}_{i,j-\frac{1}{2},k}^{\mu,\nu} \right)}{\Delta y}
+ \frac{\sum \limits_{\mu,\nu} \omega_\mu \omega_\nu \left( \overline{ ( B_3)}_{i,j,k+\frac{1}{2}}^{\mu,\nu}
- \overline{ ( B_3)}_{i,j,k-\frac{1}{2}}^{\mu,\nu} \right)}{\Delta z}.
\end{align*}
Let $\{ \hat {\tt x}_i^{(\delta) }\}_{\delta=1} ^ {\tt L}$, $\{ \hat {\tt y}_j^{(\delta)} \}_{\delta=1} ^{\tt L}$
and $\{ \hat {\tt z}_k^{(\delta)} \}_{\delta=1} ^{\tt L}$
be the $\tt L$-point Gauss-Lobatto quadrature nodes in the intervals
$[{\tt x}_{i-\frac{1}{2}},{\tt x}_{i+\frac{1}{2}}]$, $[{\tt y}_{j-\frac{1}{2}},{\tt y}_{j+\frac{1}{2}} ]$
and $[{\tt z}_{k-\frac{1}{2}},{\tt z}_{k+\frac{1}{2}} ]$, respectively, and
$ \{\hat \omega_\delta\}_{\delta=1} ^ {\tt L}$ be the associated weights satisfying $\sum_{\delta=1}^{\tt L} \hat\omega_\delta = 1$, where ${\tt L}\ge \frac{{\tt K}+3}2$ such that the
associated quadrature has algebraic precision of at least degree ${\tt K}$.
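For instance, for ${\tt L}=3$ the normalized Gauss--Lobatto weights on a unit cell are $(1/6,\,2/3,\,1/6)$, so $\hat\omega_1 = 1/6$ enters the CFL condition \eqref{eq:CFL:3DMHD} below, and the rule has algebraic precision $2{\tt L}-3 = 3$:

```python
import numpy as np

# 3-point Gauss-Lobatto rule on a unit cell [0, 1]: the nodes include both
# endpoints and the normalized weights are (1/6, 2/3, 1/6), so hat_omega_1
# equals 1/6; the algebraic precision is 2L - 3 = 3 for L = 3.
xh = np.array([0.0, 0.5, 1.0])
wh = np.array([1.0, 4.0, 1.0]) / 6.0
assert abs(float(wh.sum()) - 1.0) < 1e-15
for k in range(4):                           # exact for degree <= 3
    assert abs(float((wh * xh**k).sum()) - 1.0 / (k + 1)) < 1e-14
```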
Then we have the following sufficient
conditions for the high-order scheme \eqref{eq:3DMHD:cellaverage} to be PP.
\begin{theorem} \label{thm:PP:3DMHD}
If the polynomial vectors $\{{\bf U}_{ijk}^n({\tt x},{\tt y},{\tt z})\}$ satisfy:
\begin{align}\label{eq:DivB:cst123}
&
\mbox{\rm div} _{ijk} {\bf B}^n = 0, \quad \forall i,j,k,
\\
&{\bf U}_{ijk}^n ({\bf x}) \in {\mathcal G},\quad \forall{\bf x} \in {\Theta}_{ijk}, \forall i,j,k,
\label{eq:3Dadmissiblity}
\end{align}
with
${\Theta}_{ijk}:=\big\{( \hat {\tt x}_i^{(\delta)},{\tt y}_j^{(\mu)},{\tt z}_k^{(\nu)} ),~( {\tt x}_i^{(\mu)}, \hat {\tt y}_j^{(\delta)},{\tt z}_k^{(\nu)} ),~( {\tt x}_i^{(\mu)},
{\tt y}_j^{(\nu)}, \hat{\tt z}_k^{(\delta)} ), \forall \mu,\nu,\delta \big\},$
then the scheme \eqref{eq:3DMHD:cellaverage} always preserves $\bar{\bf U}_{ijk}^{n+1} \in {\mathcal G}$ under the CFL condition
\begin{equation}\label{eq:CFL:3DMHD}
0< \frac{\alpha_{1,n}^{\tt LF} \Delta t_n}{\Delta x} + \frac{ \alpha_{2,n}^{\tt LF} \Delta t_n}{\Delta y}
+ \frac{ \alpha_{3,n}^{\tt LF} \Delta t_n}{\Delta z} \le \hat \omega_1,
\end{equation}
where the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\begin{equation}\label{eq:3DhighLFpara}
\begin{split}
&\alpha_{1,n}^{\tt LF} >
\max_{ i,j,k,\mu,\nu} \alpha_1 \big( { \bf U }_{i+\frac12,j,k}^{\pm,\mu,\nu} , { \bf U }_{i-\frac12,j,k}^{\pm,\mu,\nu} \big)
,\\
&\alpha_{2,n}^{\tt LF} > \max_{ i,j,k,\mu,\nu} \alpha_2 \big( { \bf U }_{i,j+\frac12,k}^{\mu,\pm,\nu} , { \bf U }_{i,j-\frac12,k}^{\mu,\pm,\nu} \big),
\\
&
\alpha_{3,n}^{\tt LF} > \max_{ i,j,k,\mu,\nu} \alpha_3 \big( { \bf U }_{i,j,k+\frac12}^{\mu,\nu,\pm} , { \bf U }_{i,j,k-\frac12}^{\mu,\nu,\pm} \big).
\end{split}
\end{equation}
\end{theorem}
A lower bound on the internal energy can also be estimated when the
DDF condition \eqref{eq:DivB:cst123} is not satisfied.
\begin{theorem} \label{thm:PP:3DMHD:further}
Assume the polynomial vectors $\{{\bf U}_{ijk}^n({\tt x},{\tt y},{\tt z})\}$ satisfy
\eqref{eq:3Dadmissiblity}, and the parameters $\{\alpha_{\ell,n}^{\tt LF}\}$ satisfy
\eqref{eq:3DhighLFpara}.
Then under the CFL condition \eqref{eq:CFL:3DMHD},
the solution $\bar{\bf U}_{ijk}^{n+1}$
of the scheme \eqref{eq:3DMHD:cellaverage} satisfies
that $\bar \rho_{ijk}^{n+1} > 0$,
and
\begin{equation*}
{\mathcal E} ( \bar{\bf U}_{ijk}^{n+1} ) > -\Delta t_n \big(\bar{\bf v}_{ijk}^{n+1} \cdot
\bar{\bf B}_{ijk}^{n+1} \big) {\rm div}_{ijk}{\bf B}^n,
\end{equation*}
where
$\bar{\bf v}_{ijk}^{n+1} :=\bar{\bf m}_{ijk}^{n+1}/\bar{\rho}^{n+1}_{ijk}$.
\end{theorem}
\bibliographystyle{siamplain}
\bibliography{references}
\end{document}
TITLE: Given a person's age in years, what is the best way to estimate their age correctly in the future?
QUESTION [1 upvotes]: I have a database of people who have only given their age in years (and the date that they specified it). Because I want to keep this number accurate over time, I want to convert ages to birth years.
What is the best way to compute a birth date that will maximize the probability that on any given day, a user's age (calculated from the "birth date" that I make up) will be accurate?
Here's a example of the data I have available:
age | date_provided
-----+------------
21 | 2014-03-22 15:14:50.431278-05
31 | 2014-04-21 18:11:27.299024-05
27 | 2014-03-23 22:35:16.81655-05
25 | 2014-03-12 09:27:15.865215-05
30 | 2013-09-01 19:16:17.146388-05
37 | 2014-01-17 10:50:49.393871-06
26 | 2014-03-26 18:04:23.520413-05
30 | 2014-04-11 10:05:35.585068-05
30 | 2013-07-08 21:14:03.876834-05
26 | 2014-06-05 15:15:50.63014-05
16 | 2014-02-26 15:45:23.98677-06
26 | 2013-11-05 19:55:05.163587-06
18 | 2014-03-16 16:10:40.481958-05
29 | 2014-06-23 20:18:28.884308-05
21 | 2014-06-15 01:04:24.83778-05
30 | 2014-02-15 02:52:52.147953-06
27 | 2013-12-27 21:33:28.819328-06
31 | 2014-05-24 09:39:52.774244-05
28 | 2014-02-18 19:56:22.064-06
29 | 2014-01-25 16:03:23.894607-06
29 | 2014-02-12 13:10:36.682297-06
28 | 2014-01-16 08:32:38.51522-06
One option is to set their birthdate to date_provided - age, but that assumes that their birthday was the day they gave their age. One could also set all of their birthdays to the same day, but then it's likely that everyone's age will be wrong at least half of the year.
Thanks in advance!
Edit: For the sake of this question, assume that the ages provided are accurate.
REPLY [4 votes]: You can minimize the error (assuming the data you are given is accurate!?!) by setting birth date = date provided - age - $\frac 12$ year. That will be within $\frac 12$ year at all times. You are assuming a uniform distribution of the true age over [age provided, age provided + 1) and taking the middle.
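In code (Python; folding the half-year into fractional days with 365.25-day years, which is a simplification but fine at this accuracy):

```python
from datetime import datetime, timedelta

# Sketch of the rule above: birth_date = date_provided - (age + 1/2) years.
# Using 365.25-day years is a simplification that is fine at this accuracy.
def estimate_birth_date(age, date_provided):
    return date_provided - timedelta(days=(age + 0.5) * 365.25)

b = estimate_birth_date(21, datetime(2014, 3, 22))
implied_age = (datetime(2014, 3, 22) - b).days / 365.25
# implied_age is ~21.5 on the day the age was provided, so the integer age
# stays correct for roughly half a year on either side
```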
\section{Experiment Details}
\label{app:experiments}
We tested the proposed class of optimizers on different tasks that have potentially different loss surfaces in large parameter spaces.
To cast a wide net and ensure that we capture a plethora of neural architectures, we vary the network designs by using fully-connected, recurrent \cite{rumelhart1985learning}, convolutional \cite{lecun1999object}, dropout \cite{srivastava2014dropout}, ReLU \cite{agarap2018deep} layers and a large Transformer language model architecture -- RoBERTa-Base \cite{liu2019roberta} across different tasks. A brief description of the data and architectures used follows.
CIFAR-10 is an image classification task with $10$ classes. We train a shallow Convolutional Neural Network (ConvNet) with dropout and ReLU activation using the different optimizers and their $_C$ variants.
CIFAR-$100$ is an image dataset similar in size to CIFAR-$10$ but with $100$ classes of images. This task was approached using a Convolutional Neural Network with three batch-normalized convolutional blocks and one fully-connected block with dropout applied between blocks.
We experiment with the commonly-used Stanford Natural Language Inference (SNLI) dataset to compare the performances of the $_C$ variants when training three different text encoder architectures: a $4$-layer convolutional network (ConvNetEncoder), a $2$-layer bi-directional encoder with $2$ linear projection layers (InferSent) and a unidirectional LSTM with $2$ recurrent layers (LSTMEncoder). The classifier on top of the representations learned by the encoder architectures is a $2$-layer fully-connected Multi-Layer Perceptron with dropout connections.
The Penn Tree Bank (PTB) is a syntax-annotated text corpus sourced from stories from the Wall Street Journal. We use this corpus for a word-level language modeling task using a $1$-layer LSTM with dropout. Gradient clipping is employed to avoid exploding gradients. We evaluate the model by measuring perplexity (PPL) in the validation set; lower PPL scores are preferred.
MultiWoZ $2.0$ is a popular dataset that has human-to-human goal-oriented conversations on different topics. The objective is to generate the next utterance conditioned on the history of utterances in the conversation. We experiment with a very large language model architecture -- RoBERTa-Base -- and a BiLSTM Sequence-to-Sequence architecture \cite{vinyals2015neural} for the next utterance prediction. RoBERTa was trained with a CausalLM head, while the BiLSTM model was an encoder-decoder architecture with a Bi-LSTM encoder and an LSTM-with-attention decoder.
We use models with varying size of trainable parameters as shown in Table \ref{tab:model-sizes}.
\begin{table}[htbp]
\centering
\scriptsize
\caption{We experimented with models with different sizes for a comprehensive study of the proposed $_C$ variants.}
\vspace{1em}
\begin{tabular}{|p{2cm}|p{1.5cm}|p{2.5cm}|}
\hline
\textbf{Model} & \textbf{\#Params} & \textbf{Dataset(s)} \tabularnewline
\hline
Logistic Regression & 8K (55/64) & MNIST (rcv1/covtype) \tabularnewline
NeuralNetwork & 25K & MNIST\tabularnewline
ConvNet & 600K (62K) & CIFAR-100 (CIFAR-10)\tabularnewline
LSTM & 20K (600K) & PTB (WikiText) \tabularnewline
LSTMEncoder & 600K & SNLI \tabularnewline
InferSent & 1.2M & SNLI \tabularnewline
ConvNetEncoder &3.1M & SNLI \tabularnewline
RoBERTa-Base & 125M & MultiWoZ \tabularnewline
Bi-LSTM & 574K & MultiWoZ \tabularnewline
\hline
\end{tabular}
\label{tab:model-sizes}
\end{table}
We use the same hyperparameter initialization for comparisons with base optimization methods and tune the learning rate hyperparameter using grid search. The primary objective of the experiments is to verify the consistency of convergence to better solutions across tasks and architectures. All reported results are averaged over $5$ different runs with different random seed values.
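As a schematic of this averaging protocol (illustrative only: a toy least-squares problem with plain gradient descent stands in for the actual models and optimizers, and the sizes are hypothetical):

```python
import time
import numpy as np

# Toy stand-in for the protocol above: run the same configuration with 5
# different seeds, average the final metric, and clock the time per epoch.
def train_once(seed, epochs=50, lr=0.1):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(256, 8))
    w_true = rng.normal(size=8)
    y = X @ w_true
    w = np.zeros(8)
    t0 = time.perf_counter()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)        # full-batch gradient
        w -= lr * grad
    sec_per_epoch = (time.perf_counter() - t0) / epochs
    return float(np.mean((X @ w - y) ** 2)), sec_per_epoch

results = [train_once(seed) for seed in range(5)]
avg_loss = float(np.mean([loss for loss, _ in results]))
```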
\subsection{Model Hparams}
\label{app:best-hparam}
All experiments are averaged for 5 different runs with different seeds. We use PyTorch 1.1 for the experiments and use their implementation of the base optimizers available in \texttt{torch.optim}. The details of hyperparameters used for the model are in Table \ref{tab:hparams-experiments}.
\begin{table}[htbp]
\centering
\scriptsize
\caption{Architecture details for models used in experiments.}
\label{tab:hparams-experiments}
\begin{tabular}{|l|l|c|c|c|}
\hline
\textbf{Model}& \textbf{Dataset} & \textbf{\#Layers}& \textbf{\#Hidden} & \textbf{ReLU/Dropout} \\
\hline
Log. Reg. & covtype & 1 & N/A & No/No\\
Log. Reg. & rcv1 & 1 & N/A & No/No\\
Log. Reg. & MNIST & 1 & N/A & No/No\\
Log. Reg. & synthetic & 1 & N/A & No/No\\
MLP & MNIST & 2 & 32 & Yes/No\\
CNN & CIFAR-10 & 5 & 120 & Yes/Yes\\
CNN & CIFAR-100 & 9 & 4096 & Yes/Yes\\
LSTM & PTB & 1 & 128 & No/No\\
LSTM & WikiText & 1 & 128 & No/No\\
ConvNetEnc. & SNLI & 2 & 200 & Yes/Yes\\
LSTMEncoder & SNLI & 2 & 200 & No/Yes\\
InferSent & SNLI & 2 & 200& Yes/Yes\\
RoBERTa-Base & MultiWoZ & 12& 768 & Yes/Yes\\
Bi-LSTM Attn & MultiWoZ & 4 & 200 & No/No \\
\hline
\end{tabular}
\end{table}
\subsection{Runtime Statistics}
\label{app:runtime}
We logged the approximate time for each epoch for different values of \topc across the different models. Although the reported numbers come from the experiments with Adam and its $_C$ variants, they extend to the other optimizers and their variants. These times are reported in Table \ref{tab:time-per-epoch}.
\begin{table}[htbp]
\centering
\scriptsize
\caption{Approximate time taken for one epoch of training of the models in the public repositories on the different data sets. The time is clocked in minutes. The time per epoch increases linearly with increase in \topc over the (\textbf{B})ase method. Although for lower values of \topc the time per epoch is smaller, there is still room for improvement to bring down the time with smart update rules as discussed in \S \ref{sec:discussion}.}
\label{tab:time-per-epoch}
\begin{tabular}{|l|l|c|c|c|c|c|c|}
\hline
\textbf{Dataset}&\textbf{Model} & \textbf{B} &\textbf{C5} & \textbf{C10} & \textbf{C20} & \textbf{C50}& \textbf{C100} \\
\hline
\multirow{2}{*}{MNIST}&Log.Reg &0.3 &0.4 &0.5 &0.6 &0.9 &.5 \\
& Neural-Net & 0.3 & 0.5 & 0.6 & 0.8 & 1.4 &2.3 \\
PTB& LSTM& 0.08 & 0.4 & 0.6 & 0.9 & 2 & 6 \\
WikiText& LSTM &0.6 &1 &1.4 &2.6 &5 &10 \\
CIFAR-10& CNN& 0.5 & 0.9 & 1 & 1.5 & 3 & 5 \\
CIFAR-100& CNN& 0.75 & 1.5 & 2 & 4 & 6 & 12 \\
\multirow{3}{*}{SNLI}&LSTMEnc.& 1 & 3 & 6 & 10 & 25 & 45 \\
& InferSent& 2 & 4 & 9 & 20 &35 & 50 \\
& ConvNet& 1 & 4 & 8 & 20 & 40 & 65 \\
\multirow{2}{*}{MultiWoZ}&RoBERTa & 15 & 25 & 35 & 50 & NA & NA \\
&BiLSTM & 3 & 4 & 5 & 5 & 7 & 10 \\
\hline
\end{tabular}
\end{table}
TITLE: Creating A Function For The Following:
QUESTION [0 upvotes]: In trying to manipulate a function arrived at in a prior post, I've arrived at this (as a goal):
$$\begin{array}{c|c}
x & y \\ \hline
0.3^{-1} & -1 \\
0.3^{-1}+0.3^{0} & 0 \\
0.3^{-1}+0.3^{0}+0.3^{1}& 1 \\
0.3^{-1}+0.3^{0}+0.3^{1}+0.3^{2} & 2 \\
0.3^{-1}+0.3^{0}+0.3^{1}+0.3^{2}+0.3^{3} & 3 \\
\end{array}$$
But, I'm having issues creating the right function. I'd be very thankful if someone could give me a hand with it!
REPLY [2 votes]: In your other post you were given the formula (where $a=0.3$)
$$x=\dfrac{1-a^{y+2}}{(1-a)a}$$
So isolating $y$ we get $\quad a^{y+2}=1-x(1-a)a\iff y+2=\log_a(1-xa+xa^2)$
The formula you seek is thus $$y=\dfrac{\ln(1-xa+xa^2)}{\ln(a)}-2$$
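A quick numerical check of this formula against the table in the question (Python):

```python
import math

# Checking the closed form y = log_a(1 - x*a + x*a^2) - 2 against the table
# in the question (a = 0.3): x = sum_{k=-1}^{y} a^k should map back to y.
a = 0.3

def y_from_x(x):
    return math.log(1 - x * a + x * a * a) / math.log(a) - 2

for y in range(-1, 4):
    x = sum(a ** k for k in range(-1, y + 1))
    assert abs(y_from_x(x) - y) < 1e-9
```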
\begin{document}
\author{Prasanta Kumar Barik$^1$, {Ankik Kumar Giri$^1$}\footnote{Corresponding author. Tel +91-1332-284818 (O); Fax: +91-1332-273560 \newline{\it{${}$ \hspace{.3cm} Email address: }}[email protected]/[email protected]} and Philippe Lauren\c{c}ot$^2$\\
\footnotesize ${}^1$Department of Mathematics, Indian Institute of Technology Roorkee,\\ \small{ Roorkee-247667, Uttarakhand, India}\\
\footnotesize ${}^2$Institut de Mathématiques de Toulouse
UMR 5219, Université de Toulouse, CNRS \\ \small{ F-31062 Toulouse Cedex 9, France}
}
\title {Mass-conserving solutions to the Smoluchowski coagulation equation with singular kernel}
\maketitle
\begin{quote}
{\small {\em\bf Abstract.} Global weak solutions to the continuous Smoluchowski coagulation equation (SCE) are constructed for coagulation kernels featuring an algebraic singularity for small volumes and growing linearly for large volumes, thereby extending previous results obtained in Norris (1999) and Cueto Camejo \& Warnecke (2015). In particular, linear growth at infinity of the coagulation kernel is included and the initial condition may have an infinite second moment. Furthermore, all weak solutions (in a suitable sense) including the ones constructed herein are shown to be mass-conserving, a property which was proved in Norris (1999) under stronger assumptions. The existence proof relies on a weak compactness method in $L^1$ and a by-product of the analysis is that both conservative and non-conservative approximations to the SCE lead to weak solutions which are then mass-conserving.
}
\end{quote}
{\bf Keywords:} Coagulation; Singular coagulation kernels; Existence; Mass-conserving solutions\\
{\bf MSC (2010):} Primary: 45J05, 45K05; Secondary: 34A34, 45G10.
\section{Introduction}\label{existintroduction1}
The kinetic process in which particles undergo changes in their physical properties is called a particulate process. The study of particulate processes is a well-known subject in various branches of engineering, astrophysics, physics, chemistry and in many other related areas. During a particulate process, particles merge to form larger particles or break up into smaller particles. Due to this process, particles change, among other properties, their size, shape and volume. There are various types of particulate processes, such as coagulation, fragmentation, nucleation and growth. In particular, this article mainly deals with the coagulation process, which is governed by the Smoluchowski coagulation equation (SCE). In this process, two particles coalesce to form a larger particle at a particular instant.
The SCE is a nonlinear integral equation which describes the dynamics of evolution of the concentration $g(\zeta,t)$ of particles of volume $\zeta>0$ at time $t \geq 0$ \cite{Smoluchowski:1917}. The evolution of $g$ is given by
\begin{equation}\label{sce}
\frac{\partial g(\zeta,t)}{\partial t} = \mathcal{B}_c(g)({\zeta}, t) -\mathcal{D}_c(g)({\zeta}, t),\ \qquad (\zeta,t)\in (0,\infty)^2,\
\end{equation}
with initial condition
\begin{equation}\label{1in1}
g(\zeta,0) = g^{in}(\zeta)\geq 0,\ \qquad \zeta\in (0,\infty),\
\end{equation}
where the operator $\mathcal{B}_c$ and $\mathcal{D}_c$ are expressed as
\begin{equation}\label{Birthterm}
\mathcal{B}_c(g)({\zeta}, t):=\frac{1}{2} \int_{0}^{\zeta} \Psi (\zeta -\eta,\eta)g(\zeta -\eta,t)g(\eta,t)d\eta
\end{equation}
and
\begin{equation}\label{Deathterm}
\mathcal{D}_c(g)({\zeta}, t):=\int_{0}^{\infty} \Psi(\zeta,\eta)g(\zeta,t)g(\eta,t)d\eta.
\end{equation}
Here $\frac{\partial g(\zeta,t)}{\partial t}$ represents the time partial derivative of the concentration of particles of volume $\zeta$ at time $t$. In addition, the non-negative quantity $\Psi(\zeta, \eta)$ denotes the interaction rate at which particles of volume $\zeta$ and particles of volume $\eta$ coalesce to form larger particles. This rate is also known as the coagulation kernel or coagulation coefficient. The terms $\mathcal{B}_c(g)$ and $\mathcal{D}_c(g)$ on the right-hand side of \eqref{sce} represent the formation and disappearance of particles of volume $\zeta$ due to coagulation events, respectively.
Let us define the total mass (volume) of the system at time $t\ge 0$ as:
\begin{equation}\label{Totalmass}
\mathcal{M}_1(g)(t):=\int_0^{\infty}{\zeta} g(\zeta,t)d{\zeta}.
\end{equation}
According to the conservation of matter, it is well known that the total mass (volume) of particles is neither created nor destroyed. Therefore, it is expected that the total mass (volume) of the system remains conserved throughout the time evolution prescribed by \eqref{sce}--\eqref{1in1}, that is, $\mathcal{M}_1(g)(t)=\mathcal{M}_1(g^{in})$ for all $t\ge 0$. However, it is worth to mention that, for the multiplicative coagulation kernel $\Psi(\zeta, \eta)=\zeta \eta$, the total mass conservation fails for the SCE at finite time $t=1$, see \cite{Leyvraz:1981}. The physical interpretation is that the lost mass corresponds to \enquote{particles of infinite volume} created by a runaway growth in the system due to the very high rate of coalescence of very large particles. These particles, also referred to as \enquote{giant particles} \cite{Aldous:1999} are interpreted in the physics literature as a different macroscopic phase, called a \emph{gel}, and its occurrence is called the \emph{sol-gel transition} or \emph{gelation transition}. The earliest time $T_g \geq 0$ after which mass conservation no longer holds is called the \emph{gelling time} or \emph{gelation time}.
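As an illustration of the expected mass conservation, a truncated discrete analogue of \eqref{sce}--\eqref{Deathterm} with the constant kernel $\Psi \equiv 1$ and a conservative cutoff at size $N$ (coagulation events that would create sizes beyond $N$ are discarded from both the gain and the loss terms) conserves $\sum_i i\, n_i$ exactly, even under explicit Euler time stepping; the sketch below is purely illustrative:

```python
import numpy as np

# Illustrative discrete analogue of the SCE with constant kernel K = 1 and a
# conservative truncation at size N: coagulation events creating sizes > N
# are dropped from both the gain and the loss terms, so the total mass
# sum_i i*n_i is conserved exactly, even under explicit Euler stepping.
def euler_step(n, dt, K=1.0):
    N = len(n)                           # n[i-1]: concentration of size i
    birth = np.zeros(N)
    death = np.zeros(N)
    for i in range(1, N + 1):
        birth[i - 1] = 0.5 * K * sum(n[j - 1] * n[i - j - 1] for j in range(1, i))
        death[i - 1] = n[i - 1] * K * sum(n[j - 1] for j in range(1, N - i + 1))
    return n + dt * (birth - death)

N = 40
n = np.zeros(N)
n[0] = 1.0                               # monodisperse initial condition
sizes = np.arange(1, N + 1)
mass0 = float(np.sum(sizes * n))
for _ in range(100):
    n = euler_step(n, 0.02)
mass1 = float(np.sum(sizes * n))
```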
Since the works by Ball \& Carr \cite{Ball:1990} and Stewart \cite{Stewart:1989}, several articles have been devoted to the existence and uniqueness of solutions to the SCE for coagulation kernels which are bounded for small volumes and unbounded for large volumes, as well as to the mass conservation and gelation phenomenon, see \cite{Dubovskii:1996, Escobedo:2002, Escobedo:2003, Giri:2012, Laurencot:2002L, Norris:1999, Stewart:1990}, see also the survey papers \cite{Aldous:1999, Laurencot:2015, Mischler:2004} and the references therein. However, to the best of our knowledge, there are fewer articles in which existence and uniqueness of solutions to the SCE with singular coagulation rates have been studied, see \cite{Camejo:2015I, Camejo:2015II, Escobedo:2005, Escobedo:2006, Norris:1999}. In \cite{Norris:1999}, Norris investigates the existence and uniqueness of solutions to the SCE locally in time when the coagulation kernel satisfies
\begin{equation}
\Psi({\zeta}, {\eta}) \leq \phi({\zeta}) \phi({\eta}),\ \qquad (\zeta,\eta)\in (0,\infty)^2,\ \label{CK1}
\end{equation}
for some sublinear function $\phi: (0, \infty) \rightarrow [0, \infty)$, {that is, $\phi$ enjoys the property $\phi(a{\zeta})\leq a\phi({\zeta})$ for all $\zeta \in (0, \infty)$ and $a \geq 1$, and the initial condition $g^{in}$ belongs to $L^1((0,\infty);\phi(\zeta)^2d\zeta)$. Mass conservation is also shown as soon as there is $\varepsilon>0$ such that $\phi(\zeta)\ge \varepsilon\zeta$ for all $\zeta\in (0,\infty)$. In \cite{Escobedo:2006, Escobedo:2005}, global existence, uniqueness, and mass conservation are established for coagulation rates of the form $\Psi({\zeta}, {\eta}) = {\zeta}^{\mu_1}{\eta}^{\mu_2}+{\zeta}^{\mu_2}{\eta}^{\mu_1}$ with $-1\leq \mu_1 \leq \mu_2 \leq 1$, $\mu_1+\mu_2 \in[0, 2]$, and $(\mu_1, \mu_2)\neq (0, 1)$.} Recently, global existence of weak solutions to the SCE for coagulation kernels satisfying
\begin{equation*}
\Psi({\zeta}, {\eta}) \leq k^{*}(1+{\zeta}+{\eta})^{\lambda}({\zeta}{\eta})^{-\sigma},\ \qquad (\zeta,\eta)\in (0,\infty)^2,\
\end{equation*}
with $\sigma \in [0, 1/2]$, $\lambda -\sigma \in [0,1)$, and $k^{*}>0$, is obtained in \cite{Camejo:2015I} and further extended in \cite{Camejo:2015II} to the broader class of coagulation kernels
\begin{equation}
\Psi({\zeta}, {\eta}) \leq k^{*} (1+{\zeta})^{\lambda}(1+{\eta})^{\lambda}({\zeta}{\eta})^{-\sigma},\ \qquad (\zeta,\eta)\in (0,\infty)^2,\ \label{CK2}
\end{equation}
with $\sigma \geq 0$, $\lambda -\sigma \in [0,1)$, and $k^{*}>0$. In \cite{Camejo:2015II}, multiple fragmentation is also included and uniqueness is shown for the following restricted class of coagulation kernels
\begin{equation*}
\Psi_2({\zeta}, {\eta}) \leq k^{*}({\zeta}^{-\sigma}+{\zeta}^{\lambda-\sigma}) ({\eta}^{-\sigma}+{\eta}^{\lambda-\sigma}),\ \qquad (\zeta,\eta)\in (0,\infty)^2,\
\end{equation*}
where $\sigma \geq 0$ and $\lambda -\sigma \in [0, 1/2]$.
The main aim of this article is to extend and complete the previous results in two directions. We actually consider coagulation kernels satisfying the growth condition \eqref{CK1} for the non-negative function
\begin{equation*}
\phi_\beta(\zeta) := \max\left\{ \zeta^{-\beta}, \zeta \right\},\ \qquad \zeta\in (0,\infty),\
\end{equation*}
and prove the existence of a global mass-conserving solution of the SCE (\ref{sce})--(\ref{1in1}) with initial conditions in $L^1((0,\infty);(\zeta^{-2\beta}+\zeta)d\zeta)$, thereby removing the finiteness of the second moment required to apply the existence result of \cite{Norris:1999} and relaxing the assumption $\lambda<\sigma+1$ used in \cite{Camejo:2015II} for coagulation kernels satisfying \eqref{CK2}. Besides this, we show that any weak solution in the sense of Definition~\ref{definition} below is mass-conserving, a feature which was enjoyed by the solution constructed in \cite{Norris:1999} but not investigated in \cite{Camejo:2015I, Camejo:2015II}. An important consequence of this property is that it gives some flexibility in the choice of the method used to construct a weak solution to the SCE (\ref{sce})--(\ref{1in1}), since the solution will be mass-conserving whatever the approach. Recall that two different approximations of the SCE \eqref{sce} by truncation have been employed in recent years, the so-called conservative and non-conservative approximations, see \eqref{Nonconservativedeath} below. While it is expected, and actually verified in several papers, that the conservative approximation leads to a mass-conserving solution to the SCE, a similar conclusion is not expected when using the non-conservative approximation, which has rather been designed to study the gelation phenomenon, in particular from a numerical point of view \cite{Filbet:2004II, Bourgade:2008}. Still, it is by now known that, for the SCE with locally bounded coagulation kernels growing at most linearly at infinity, the non-conservative approximation also allows one to construct mass-conserving solutions \cite{Filbet:2004I, Barik:2017}.
The last outcome of our analysis is that, in our case, the conservative and non-conservative approximations can be handled simultaneously and both lead to a weak solution to the SCE which might not be the same due to the lack of a general uniqueness result but is mass-conserving.
We now outline the contents of the paper. In the next section, we state precisely our hypotheses on the coagulation kernel and on the initial data, together with the definition of weak solutions and the main result. In Section 3, all weak solutions are shown to be mass-conserving. Finally, in the last section, the existence of a weak solution to the SCE \eqref{sce}--\eqref{1in1} is obtained by a weak $L^1$ compactness method applied to either the non-conservative or the conservative approximation of the SCE.
\section{Main result}
We assume that the coagulation kernel $\Psi$ satisfies the following hypotheses.
\begin{hypp}\label{hyppmcs}
(H1) $\Psi$ is a non-negative measurable function on $(0,\infty) \times (0,\infty)$,
\\
(H2) There are $\beta>0$ and $k>0$ such that
\begin{equation*}
\begin{array}{ll}
0 \le \Psi(\zeta,\eta) = \Psi(\eta,\zeta) \le k (\zeta\eta)^{-\beta},\ & (\zeta,\eta)\in (0,1)^2, \\
0 \le \Psi(\zeta,\eta) = \Psi(\eta,\zeta) \le k \eta \zeta^{-\beta},\ & (\zeta,\eta)\in (0,1)\times (1,\infty), \\
0 \le \Psi(\zeta,\eta) = \Psi(\eta,\zeta) \le k (\zeta+\eta),\ & (\zeta,\eta)\in (1,\infty)^2.
\end{array}
\end{equation*}
Observe that (H2) implies that
\begin{equation*}
\Psi(\zeta,\eta) \le k \max\left\{ \zeta^{-\beta} , \zeta \right\} \max\left\{ \eta^{-\beta} , \eta \right\},\ \qquad (\zeta,\eta)\in (0,\infty)^2.
\end{equation*}
\end{hypp}
Let us now mention some interesting singular coagulation kernels satisfying Hypotheses~\ref{hyppmcs}.
\begin{itemize}
\item[(a)] Smoluchowski's coagulation kernel \cite{Smoluchowski:1917} (with $\beta=1/3$)
\begin{equation*}
\Psi(\zeta,\eta)= \left( \zeta^{1/3} + \eta^{1/3} \right) \left( \zeta^{-1/3} + \eta^{-1/3} \right),\ \qquad (\zeta,\eta)\in (0,\infty)^2.
\end{equation*}
\item[(b)] Granulation kernel \cite{Kapur:1972}
\begin{equation*}
\Psi(\zeta, \eta) = \frac{(\zeta +\eta)^{\theta_1}}{({\zeta}{\eta})^{\theta_2}},~~ \mbox{where}~~ {\theta_1} \leq 1 \ \text{and}\ {\theta_2} \geq 0.
\end{equation*}
\item[(c)] Stochastic stirred froths \cite{Clark:1999}
\begin{equation*}
\Psi(\zeta, \eta) = (\zeta\eta)^{-\beta},~~ \mbox{where}~~ \beta > 0.
\end{equation*}
\end{itemize}
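As an elementary illustration (not taken from the references above), let us verify that Smoluchowski's kernel (a) satisfies $(H2)$ with $\beta=1/3$ and $k=4$. Expanding the product gives
\begin{equation*}
\Psi(\zeta,\eta) = 2 + \left( \frac{\zeta}{\eta} \right)^{1/3} + \left( \frac{\eta}{\zeta} \right)^{1/3},\ \qquad (\zeta,\eta)\in (0,\infty)^2.
\end{equation*}
On $(0,1)^2$, one has $2\le 2(\zeta\eta)^{-1/3}$ and $\zeta^{1/3}\eta^{-1/3} \le (\zeta\eta)^{-1/3}$, so that $\Psi(\zeta,\eta)\le 4(\zeta\eta)^{-1/3}$. On $(0,1)\times (1,\infty)$, one has $2\le 2\eta\zeta^{-1/3}$, $\zeta^{1/3}\eta^{-1/3}\le 1\le \eta\zeta^{-1/3}$, and $\eta^{1/3}\zeta^{-1/3}\le \eta\zeta^{-1/3}$, so that $\Psi(\zeta,\eta)\le 4\eta\zeta^{-1/3}$. Finally, on $(1,\infty)^2$, $2\le \zeta+\eta$, $\zeta^{1/3}\eta^{-1/3}\le \zeta$, and $\eta^{1/3}\zeta^{-1/3}\le \eta$, so that $\Psi(\zeta,\eta)\le 2(\zeta+\eta)$. A similar direct check shows that the kernel (c) satisfies $(H2)$ with $k=1$.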
Before providing the statement of Theorem~\ref{thm1}, we recall the following definition of weak solutions to the SCE (\ref{sce})--(\ref{1in1}). We set $L^1_{-2\beta, 1}(0, \infty):=L^1((0, \infty ) ; ({\zeta}^{-2\beta}+\zeta)d{\zeta})$.
\begin{definition}\label{definition}
Let $T\in (0,\infty]$ and $g^{in}\in L_{-2\beta,1}^1(0,\infty)$, $g^{in}\ge 0$ a.e. in $(0,\infty)$. A non-negative real-valued function $g=g({\zeta},t)$ is a weak solution to equations (\ref{sce})--(\ref{1in1}) on $[0,T)$ if $g \in \mathcal{C}({[0,T)};L^1(0, \infty))\cap L^{\infty}(0,T;L^1_{-2\beta, 1}(0, \infty))$
and satisfies
\begin{align}\label{wsce}
\int_0^{\infty}[ g({\zeta},t) - g^{in}({\zeta})]&\omega({\zeta})d{\zeta}=\frac{1}{2}\int_0^t \int_0^{\infty} \int_{0}^{\infty}\tilde{\omega}({\zeta},{\eta}) \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds,
\end{align}
for every $t\in (0,T)$ and $\omega \in L^{\infty}(0, \infty)$, where
\begin{equation*}
\tilde{\omega} ({\zeta},{\eta}):=\omega({\zeta}+{\eta})-\omega ({\zeta})-\omega({\eta}), \qquad (\zeta,\eta)\in (0,\infty)^2.
\end{equation*}
\end{definition}
Now, we are in a position to state the main theorem of this paper.
\begin{thm}\label{thm1}
Assume that the coagulation kernel satisfies Hypotheses~$(H1)$--$(H2)$ and consider a non-negative initial condition $g^{in}\in L^1_{-2\beta, 1}(0, \infty)$. There exists at least one mass-conserving weak solution $g$ to the SCE (\ref{sce})--(\ref{1in1}) on $[0,\infty)$, that is, $g$ is a weak solution to (\ref{sce})--(\ref{1in1}) in the sense of Definition~\ref{definition} satisfying $\mathcal{M}_1(g)(t) = \mathcal{M}_1(g^{in})$ for all $t\ge 0$, the total mass $\mathcal{M}_1(g)$ being defined in \eqref{Totalmass}.
\end{thm}
\section{Weak solutions are mass-conserving}
In this section, we establish that any weak solution $g$ to (\ref{sce})--(\ref{1in1}) on $[0,T)$, $T\in (0,\infty]$, in the sense of Definition~\ref{definition} is mass-conserving, that is, satisfies
\begin{equation}
\mathcal{M}_1(g)(t) = \mathcal{M}_1(g^{in}),\ \qquad t\in [0,T). \label{PhL0}
\end{equation}
To this end, we adapt an argument designed in \cite[Section~3]{Ball:1990} to investigate the same issue for the discrete coagulation-fragmentation equations and show that the behaviour of $g$ for small volumes required in Definition~\ref{definition} allows us to control the possible singularity of $\Psi$.
\begin{thm}\label{themcs}
Suppose that $(H1)$--${(H2)}$ hold. Let $g$ be a weak solution to (\ref{sce})--(\ref{1in1}) on $[0, T)$ for some $T\in (0,\infty]$. Then $g$ satisfies the mass conservation property \eqref{PhL0} on $[0,T)$.
\end{thm}
In order to prove Theorem~\ref{themcs}, we need the following sequence of lemmas.
\begin{lem}\label{lemma1}
Assume that $(H1)$--{$(H2)$} hold. Let $g$ be a weak solution to (\ref{sce})--(\ref{1in1}) on $[0, T)$. Then, for $q\in (0,\infty)$ and $t\in (0,T)$,
\begin{align}\label{lemma11}
\int_0^q \zeta g(\zeta, t) d\zeta -\int_0^q \zeta g^{in}(\zeta ) d\zeta = -\int_{0}^{t} \int_0^q \int_{q- \zeta}^{\infty} \zeta \Psi(\zeta, \eta)g(\zeta, s) g(\eta, s) d\eta d\zeta ds.
\end{align}
\end{lem}
\begin{proof}
Set $\omega (\zeta)=\zeta \chi_{(0, q)}(\zeta) $ {for $\zeta\in (0,\infty)$ and note that}
\begin{equation*}
\tilde{ \omega}(\zeta, \eta)=\begin{cases}
0,\ & \text{if}\ \zeta+\eta \in (0,q),\ \\
-(\zeta+\eta),\ & \text{if}\ \zeta+\eta \geq q,\ (\zeta,\eta)\in (0,q)^2,\ \\
-\zeta, \ & \text{if}\ (\zeta,\eta)\in (0,q)\times [q,\infty),\ \\
- \eta, \ & \text{if}\ (\zeta,\eta) \in [q,\infty) \times (0,q),\ \\
0,\ & \text{if}\ (\zeta,\eta)\in [q,\infty)^2.
\end{cases}
\end{equation*}
Inserting the above values of $\tilde{\omega}$ into \eqref{wsce} and using the symmetry of $\Psi$, we have
\begin{align*}
\int_0^{q}[ g({\zeta},t) - g^{in}({\zeta})] {\zeta} d{\zeta}=&\frac{1}{2}\int_0^t \int_0^{\infty} \int_{0}^{\infty}\tilde{\omega}({\zeta},{\eta}) \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds\nonumber\\
=&-\frac{1}{2}\int_0^t \int_0^{q} \int_{q-\zeta}^{q} (\zeta +\eta) \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds\nonumber\\
&-\frac{1}{2}\int_0^t \int_0^{q} \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds\nonumber\\
&-\frac{1}{2}\int_0^t \int_q^{\infty} \int_{0}^{q} \eta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds\\
=&{-\int_0^t\int_0^q\int_{q-\zeta}^q \zeta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds}\nonumber\\
&{-\int_0^t\int_0^q \int_q^\infty \zeta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta}ds},
\end{align*}
which completes the proof of Lemma~\ref{lemma1}.
\end{proof}
In order to complete the proof of Theorem~\ref{themcs}, it is sufficient to show that the right-hand side of (\ref{lemma11}) goes to zero as $q \to \infty$. The first step in that direction is the following result.
\begin{lem}\label{lemma2}
Assume that $(H1)$--{$(H2)$} hold. Let $g$ be a weak solution to (\ref{sce})--(\ref{1in1}) on $[0, T)$ and consider $t\in (0,T)$. Then
\begin{align*}
(i)\int_q^{\infty}[g(\zeta, t)-g^{in}(\zeta)]d\zeta=&- \frac{1}{2} \int_{0}^{t} \int_q^{\infty}\int_q^{\infty} \Psi(\zeta, \eta)g(\zeta, s) g(\eta, s) d\eta d\zeta ds\\
&+ \frac{1}{2} \int_0^t \int_0^q \int_{q-\zeta}^{q} \Psi(\zeta, \eta)g(\zeta, s) g(\eta, s) d\eta d\zeta ds,
\end{align*}
\begin{align*}
(ii)\lim_{q \to \infty} \int_{0}^{t} q \bigg[ \int_0^q \int_{q-\zeta}^{q} \Psi(\zeta, \eta)g(\zeta, s) g(\eta, s) d\eta d\zeta- \int_q^{\infty}\int_q^{\infty} \Psi(\zeta, \eta)g(\zeta, s) g(\eta, s) d\eta d\zeta \bigg]ds=0.
\end{align*}
\end{lem}
\begin{proof}
Set $\omega(\zeta) =\chi_{[q, \infty)}(\zeta)$ for $\zeta\in (0,\infty)$; the corresponding $\tilde{ \omega}$ is
\begin{equation*}
\tilde{ \omega}(\zeta, \eta)=\begin{cases}
0,\ & \text{if}\ \zeta+\eta \in (0,q),\\
1,\ & \text{if}\ \zeta+\eta \in [q,\infty),\ (\zeta,\eta)\in (0,q)^2,\\
0, \ & \text{if}\ (\zeta,\eta) \in (0,q)\times [q,\infty),\\
0, \ & \text{if}\ (\zeta,\eta) \in [q,\infty) \times (0,q),\\
-1,\ & \text{if}\ (\zeta,\eta) \in [q,\infty)^2.\\
\end{cases}
\end{equation*}
Inserting the above values of $\tilde{ \omega}$ into (\ref{wsce}), we obtain Lemma~\ref{lemma2} $(i)$.\\
Next, we readily infer from the integrability of $\zeta\mapsto \zeta g(\zeta,t)$ and $\zeta\mapsto \zeta g^{in}(\zeta)$ and Lebesgue's dominated convergence theorem that
\begin{equation*}
\lim_{q\to\infty} q \left| \int_q^\infty [g(\zeta, t)-g^{in}(\zeta)]d\zeta \right| \le \lim_{q\to\infty} \int_q^\infty \zeta [g(\zeta, t)+ g^{in}(\zeta)]d\zeta = 0.
\end{equation*}
Multiplying the identity stated in Lemma~\ref{lemma2}~$(i)$ by $q$, we deduce from the previous statement that the left-hand side of the thus obtained identity converges to zero as $q\to\infty$. Then so does its right-hand side, which proves Lemma~\ref{lemma2}~$(ii)$.
\end{proof}
\begin{lem}\label{lemma4}
Assume that $(H1)$--$(H2)$ hold. Let $g$ be a weak solution to (\ref{sce})--(\ref{1in1}) on $[0,T)$. Then, for $t\in (0,T)$,
\begin{align*}
(i)\lim_{q \to \infty}\int_0^t \int_0^q \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta} ds=0,
\end{align*}
and
\begin{align*}
(ii)\lim_{q \to \infty} q \int_0^t \int_q^{\infty} \int_{q}^{\infty} \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta} ds=0.
\end{align*}
\end{lem}
\begin{proof}
Let $q>1$, $t\in (0,T)$, and $s\in (0,t)$. To prove the first part of Lemma~\ref{lemma4}, we split the integral as follows
\begin{equation*}
\int_0^q \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} = J_1(q,s) + J_2(q,s),
\end{equation*}
with
\begin{align*}
J_1(q,s) & := \int_0^1 \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta}, \\
J_2(q,s) & := \int_1^q \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta}.
\end{align*}
On the one hand, it follows from $(H2)$ and Young's inequality that
\begin{align*}
J_1(q,s) & \le k \int_0^1 \int_{q}^{\infty} \zeta^{1-\beta} \eta g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} \\
& \le k \left( \int_0^\infty \zeta^{1-\beta} g(\zeta,s) d\zeta \right) \left( \int_q^\infty \eta g(\eta,s) d\eta \right) \\
& \le k \|g(s)\|_{L_{-2\beta,1}^1(0,\infty)} \int_q^\infty \eta g(\eta,s) d\eta
\end{align*}
and the integrability properties of $g$ from Definition~\ref{definition} and Lebesgue's dominated convergence theorem entail that
\begin{equation}
\lim_{q \to \infty}\int_0^t J_1(q,s) ds = 0. \label{PhLz1}
\end{equation}
On the other hand, we infer from (H2) that
\begin{align*}
J_2(q,s) & \le k \int_1^q \int_{q}^{\infty} \zeta (\zeta+\eta) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} \\
& \le 2k \int_1^q \int_{q}^{\infty} \zeta \eta g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} \\
& \le 2k \mathcal{M}_1(g)(s) \int_q^\infty \eta g(\eta,s) d\eta,
\end{align*}
and we argue as above to conclude that
\begin{equation*}
\lim_{q \to \infty}\int_0^t J_2(q,s) ds = 0.
\end{equation*}
Recalling \eqref{PhLz1}, we have proved Lemma~\ref{lemma4}~$(i)$.\\
Similarly, by $(H2)$,
\begin{align*}
q \int_q^{\infty} \int_{q}^{\infty} \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} & \le k \int_q^\infty \int_q^\infty (q\zeta+q\eta) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} \\
& \le 2k \int_q^\infty \int_{q}^{\infty} \zeta \eta g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} \\
& \le 2k \mathcal{M}_1(g)(s)\int_q^\infty \eta g(\eta,s) d\eta,
\end{align*}
and we use once more the previous argument to obtain Lemma~\ref{lemma4}~$(ii)$.
\end{proof}
Now, we are in a position to prove Theorem~\ref{themcs}.
\begin{proof}[Proof of Theorem~\ref{themcs}] Let $t\in (0,T)$. From Lemma~\ref{lemma4}~$(i)$, we obtain
\begin{equation}\label{mce10}
\lim_{q \to \infty}\int_0^t \int_0^q \int_{q}^{\infty}\zeta \Psi({\zeta},{\eta})g({\zeta},s)g({\eta},s)d{\eta}d{\zeta} ds=0,
\end{equation}
while Lemma~\ref{lemma2}~$(ii)$ and Lemma~\ref{lemma4}~$(ii)$ imply that
\begin{equation}\label{mce12}
\lim_{q \to \infty} q \int_0^t \int_0^{q} \int_{q-\zeta}^{q} \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} ds=0.
\end{equation}
Since
\begin{align*}
\int_0^t \int_0^{q} \int_{q-\zeta}^{\infty} \zeta \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} ds & \le q \int_0^t \int_0^{q} \int_{q-\zeta}^{q} \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} ds \\
& + \int_0^t \int_0^{q} \int_{q}^{\infty} \zeta \Psi({\zeta},{\eta}) g({\zeta},s) g({\eta},s) d{\eta} d{\zeta} ds,
\end{align*}
it readily follows from \eqref{mce10} and \eqref{mce12} that the right-hand side of \eqref{lemma11} converges to zero as $q\to\infty$. Consequently,
\begin{equation*}
\mathcal{M}_1(g)(t) = \lim_{q\to\infty} \int_0^q \zeta g(\zeta,t) d\zeta = \lim_{q\to\infty} \int_0^q \zeta g^{in}(\zeta) d\zeta = \mathcal{M}_1(g^{in}).
\end{equation*}
This completes the proof of Theorem~\ref{themcs}.
\end{proof}
\section{Existence of weak solutions}
This section is devoted to the construction of weak solutions to the SCE \eqref{sce}--\eqref{1in1} with a non-negative initial condition $g^{in}\in L_{-2\beta,1}^1(0,\infty)$. It is achieved by a classical compactness technique, the appropriate functional setting being here the space $L^1(0,\infty)$ endowed with its weak topology first used in the seminal work \cite{Stewart:1989} and subsequently further developed in \cite{Barik:2017, Camejo:2015I, Camejo:2015II, Escobedo:2003, Filbet:2004I, Giri:2012, Laurencot:2002L}.
Given a non-negative initial condition $g^{in}\in L_{-2\beta,1}^1(0,\infty)$, the starting point of this approach is the choice of an approximation of the SCE \eqref{sce}--\eqref{1in1}, which we set here to be
\begin{equation}\label{tncfe}
\frac{\partial g_n({\zeta},t)} {\partial t} = \mathcal{B}_c(g_n)(\zeta,t) - \mathcal{D}_{c,n}^{\theta}(g_n)(\zeta,t),\ \qquad (\zeta,t)\in (0,n)\times (0, \infty),
\end{equation}
with truncated initial condition
\begin{equation}\label{nctp1a}
g_n(\zeta,0) = g_n^{in}({\zeta}):=g^{in}({\zeta})\chi_{(0,n)}({\zeta}), \qquad \zeta\in (0,n),
\end{equation}
where $n\ge 1$ is an integer, $\theta\in \{0,1\}$,
\begin{equation}
\Psi_n^\theta(\zeta,\eta) := \Psi(\zeta,\eta) \chi_{(1/n,n)}(\zeta) \chi_{(1/n,n)}(\eta) \left[ 1 - \theta + \theta \chi_{(0,n)}(\zeta+\eta) \right] \label{nctp1b}
\end{equation}
for $(\zeta,\eta)\in (0,\infty)^2$ and
\begin{equation}\label{Nonconservativedeath}
\mathcal{D}_{c,n}^{\theta}(g)(\zeta) := \int_{0}^{n-\theta\zeta} \Psi_n^\theta({\zeta},{\eta}) g({\zeta}) g ({\eta}) d{\eta},\ \qquad \zeta\in (0,n),\
\end{equation}
the gain term $\mathcal{B}_c(g)(\zeta)$ being still defined by \eqref{Birthterm} for $\zeta\in (0,n)$. The introduction of the additional parameter $\theta\in\{0,1\}$ allows us to handle simultaneously the so-called conservative approximation ($\theta=1$) and non-conservative approximation ($\theta=0$) and thereby prove that both approximations allow us to construct weak solutions to the SCE \eqref{sce}--\eqref{1in1}, a feature which is of interest when no general uniqueness result is available. Note that we also truncate the coagulation for small volumes to guarantee the boundedness of $\Psi_n^\theta$ which is a straightforward consequence of $(H2)$ and \eqref{nctp1b}. Thanks to this property, it follows from \cite{Stewart:1989} ($\theta=1$) and \cite{Filbet:2004I} ($\theta=0$) that there is a unique non-negative solution $g_n\in \mathcal{C}^1([0,\infty);L^1(0,n))$ to \eqref{tncfe}--\eqref{nctp1a} (we do not indicate the dependence upon $\theta$ for notational simplicity) which satisfies
\begin{equation}
\int_0^n{ {\zeta}g_n({\zeta},t)}d{\zeta} =\int_0^n{ {\zeta}g^{in}_n({\zeta})}d{\zeta} - (1-\theta) \int_0^t \int_0^n \int_{n-{\zeta}}^n {\zeta}\Psi_n^\theta(\zeta,\eta)g_n({\zeta},s)g_n({\eta},s)d\eta d{\zeta}ds \label{PhLz2}
\end{equation}
for $t\ge 0$. The second term on the right-hand side of \eqref{PhLz2} vanishes for $\theta=1$ and the total mass of $g_n$ then remains constant throughout the time evolution, which is the reason why this approximation is called conservative. In contrast, when $\theta=0$, the total mass of $g_n$ decreases as a function of time. In both cases, it readily follows from \eqref{PhLz2} that
\begin{equation}\label{masstn}
\int_0^n{ {\zeta}g_n({\zeta},t)}d{\zeta} \leq \int_0^n{ {\zeta}g_n^{in}({\zeta})}d{\zeta} \le \mathcal{M}_1(g^{in}), \qquad t\ge 0.
\end{equation}
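Although not needed for the analysis, the dichotomy between the conservative ($\theta=1$) and non-conservative ($\theta=0$) approximations expressed by \eqref{PhLz2} and \eqref{masstn} is easily observed numerically. The following minimal sketch (our illustration, not the schemes of \cite{Stewart:1989, Filbet:2004I}) performs explicit Euler steps of a truncated scheme on a uniform grid for the constant kernel $\Psi\equiv 1$, which satisfies $(H1)$--$(H2)$; the small-volume cutoff is omitted here since this kernel is already bounded:

```python
import numpy as np

def coag_step(g, h, dt, theta, K=1.0):
    # One explicit Euler step of a truncated coagulation scheme on the uniform
    # grid x_i = i*h, i = 1..N (so n = N*h), for the constant kernel Psi = K.
    # theta = 1: conservative truncation (only events with x_i + x_j <= n),
    # theta = 0: non-conservative truncation (every loss is counted).
    N = g.size
    birth = np.zeros(N)
    for i in range(2, N + 1):          # gain at x_i from pairs x_j + x_{i-j}
        j = np.arange(1, i)
        birth[i - 1] = 0.5 * h * K * np.dot(g[j - 1], g[i - j - 1])
    death = np.zeros(N)
    for i in range(1, N + 1):          # loss of x_i by coalescence with x_j
        jmax = N - theta * i           # theta = 1 drops pairs with x_i + x_j > n
        death[i - 1] = h * K * g[i - 1] * g[:jmax].sum()
    return g + dt * (birth - death)

h, N = 0.1, 100
x = h * np.arange(1, N + 1)
g0 = np.exp(-x)                        # initial datum restricted to (0, n)
mass = lambda g: h * np.dot(x, g)

g_c, g_nc = g0.copy(), g0.copy()
for _ in range(50):
    g_c = coag_step(g_c, h, dt=0.01, theta=1)
    g_nc = coag_step(g_nc, h, dt=0.01, theta=0)

# Mass is conserved exactly for theta = 1 and decreases for theta = 0.
assert abs(mass(g_c) - mass(g0)) < 1e-10
assert mass(g_nc) < mass(g0)
```

On a uniform grid one has $x_{i+j}=x_i+x_j$ exactly, so the conservative truncation preserves the discrete mass to machine precision, while the non-conservative one loses the mass carried by pairs with $x_i+x_j>n$.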
For further use, we next state the weak formulation of (\ref{tncfe})--(\ref{nctp1a}): for $t>0$ and $\omega \in L^{\infty}(0, n)$, there holds
\begin{equation}\label{nctp3}
\int_0^n \omega({\zeta})[g_n({\zeta},t)-g_n^{in}({\zeta})]d{\zeta}= \frac{1}{2} \int_0^t \int_{1/n}^n\int_{1/n}^{n} H_{\omega,n}^\theta({\zeta},{\eta})\Psi_n^{\theta}(\zeta,\eta) g_n({\zeta},s)g_n({\eta},s)d\eta d{\zeta}ds,
\end{equation}
where
\begin{equation*}
H_{\omega,n}^\theta({\zeta},{\eta}) := \omega ({\zeta}+{\eta})\chi_{(0,n)}({\zeta}+{\eta})- [\omega({\zeta})+\omega({\eta})] \left(1-\theta + \theta \chi_{(0,n)}(\zeta+\eta) \right)
\end{equation*}
for $(\zeta,\eta)\in (0,n)^2$.
In order to prove Theorem~\ref{thm1}, we shall show the convergence (with respect to an appropriate topology) of a subsequence of $(g_n)_{n\ge 1}$ towards a weak solution to \eqref{sce}--\eqref{1in1}. For that purpose, we now derive several estimates and first recall that, since $g^{in}\in L^1_{-2\beta, 1}(0, \infty)$, a refined version of the de la Vall\'{e}e-Poussin theorem, see \cite{Le:1977} or \cite[Theorem~8]{Laurencot:2015}, guarantees that there exist two non-negative and convex functions $\sigma_1$ and $\sigma_2$ in $\mathcal{C}^2([0,\infty))$ such that $\sigma_1'$ and $\sigma_2'$ are concave,
\begin{equation}\label{convex1}
\sigma_i(0)=\sigma_i'(0)= 0,\ \ \lim_{x \to {\infty}}\frac{\sigma_i(x)}{x}=\infty,\ \ \ i=1,2,
\end{equation}
and
\begin{equation}\label{convex2}
\mathcal{I}_1 := \int_0^{\infty}\sigma_1({\zeta}) g^{in}({\zeta})d{\zeta}<\infty,\ \ \text{and}\ \ \mathcal{I}_2 := \int_0^{\infty}{\sigma_2\left( {\zeta}^{-\beta}g^{in}({\zeta}) \right)}d{\zeta}<\infty.
\end{equation}
Let us state the following properties of the above defined functions $\sigma_1$ and $\sigma_2$ which are required to prove Theorem~\ref{thm1}.
\begin{lem}\label{lemmaconvex} For $(x,y)\in (0,\infty)^2$, there holds
\begin{eqnarray*}
\hspace{-5cm}(i)~~~\sigma_2(x)\leq x\sigma_2'(x)\leq 2\sigma_2(x),\
\end{eqnarray*}
\begin{eqnarray*}
\hspace{-5cm}(ii)~~ x\sigma_2'(y)\leq \sigma_2(x)+ \sigma_2(y),\
\end{eqnarray*}
and
\begin{eqnarray*}
(iii)~0\leq \sigma_1(x+y)-\sigma_1(x)-\sigma_1(y)\leq 2\frac{x\sigma_1(y)+y\sigma_1(x)}{x+y}.
\end{eqnarray*}
\end{lem}
\begin{proof}
A proof of the statements $(i)$ and $(iii)$ may be found in \cite[Proposition~14]{Laurencot:2015} while $(ii)$ can easily be deduced from $(i)$ and the convexity of $\sigma_2$.
\end{proof}
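As a concrete example (our choice; any pair of functions provided by the de la Vall\'{e}e-Poussin theorem works), the function $\sigma(x)=x\ln(1+x)$ is convex with $\sigma(0)=\sigma'(0)=0$, superlinear at infinity, and has a concave derivative $\sigma'(x)=\ln(1+x)+x/(1+x)$, so that all three statements of Lemma~\ref{lemmaconvex} apply to it. A quick numerical sanity check:

```python
import numpy as np

# sigma(x) = x*log(1 + x): convex, sigma(0) = sigma'(0) = 0, sigma(x)/x -> infinity
# as x -> infinity, and sigma'(x) = log(1 + x) + x/(1 + x) is concave.
def sigma(x):
    return x * np.log1p(x)

def dsigma(x):
    return np.log1p(x) + x / (1.0 + x)

x = np.linspace(1e-3, 50.0, 400)
X, Y = np.meshgrid(x, x)

# (i)   sigma(x) <= x sigma'(x) <= 2 sigma(x)
assert np.all(sigma(x) <= x * dsigma(x) + 1e-12)
assert np.all(x * dsigma(x) <= 2.0 * sigma(x) + 1e-12)

# (ii)  x sigma'(y) <= sigma(x) + sigma(y)
assert np.all(X * dsigma(Y) <= sigma(X) + sigma(Y) + 1e-12)

# (iii) 0 <= sigma(x+y) - sigma(x) - sigma(y) <= 2 (x sigma(y) + y sigma(x)) / (x + y)
excess = sigma(X + Y) - sigma(X) - sigma(Y)
assert np.all(excess >= -1e-12)
assert np.all(excess <= 2.0 * (X * sigma(Y) + Y * sigma(X)) / (X + Y) + 1e-12)
```

For this choice, $(i)$ can also be checked by hand: $x\sigma'(x)-\sigma(x)=x^2/(1+x)\ge 0$ and $2\sigma(x)-x\sigma'(x)=x[\ln(1+x)-x/(1+x)]\ge 0$.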
We recall that throughout this section, the coagulation kernel $\Psi$ is assumed to satisfy $(H1)$--$(H2)$ and $g^{in}$ is a non-negative function in $L^1_{-2\beta, 1}(0,\infty )$.
\subsection{Moment estimates}
We begin with a uniform bound in $L_{-2\beta,1}^1(0,\infty)$.
\begin{lem}\label{LemmaEquibound}
There exists a positive constant $\mathcal{B}>0$ depending only on $g^{in}$ such that, for $t\ge 0$,
\begin{equation*}
\int_0^n \left( {\zeta}+{\zeta}^{-2\beta} \right) g_n(\zeta,t)d{\zeta} \leq \mathcal{B}.
\end{equation*}
\end{lem}
\begin{proof} Let $\delta\in (0,1)$ and take $\omega(\zeta)=(\zeta+\delta)^{-2\beta}$, $\zeta\in (0,n)$, in \eqref{nctp3}. With this choice of $\omega$,
\begin{equation*}
H_{\omega,n}^\theta(\zeta,\eta) \le \left[ (\zeta+\eta+\delta)^{-2\beta} - (\zeta+\delta)^{-2\beta} - (\eta+\delta)^{-2\beta} \right] \chi_{(0,n)}(\zeta+\eta) \le 0
\end{equation*}
for all $(\zeta,\eta)\in (0,n)^2$, so that \eqref{nctp3} entails that, for $t\ge 0$,
\begin{equation*}
\int_0^n (\zeta+\delta)^{-2\beta} g_n(\zeta,t) d\zeta \le \int_0^n (\zeta+\delta)^{-2\beta} g_n^{in}(\zeta) d\zeta \le \int_0^\infty \zeta^{-2\beta} g^{in}(\zeta) d\zeta.
\end{equation*}
We then let $\delta\to 0$ in the previous inequality and deduce from Fatou's lemma that
\begin{equation*}
\int_0^n \zeta^{-2\beta} g_n(\zeta,t) d\zeta \le \int_0^\infty \zeta^{-2\beta} g^{in}(\zeta) d\zeta,\ \qquad t\ge 0.
\end{equation*}
Combining the previous estimate with \eqref{masstn} gives Lemma~\ref{LemmaEquibound} with $\mathcal{B} := \|g^{in}\|_{L^1_{-2\beta, 1}(0,\infty)}$.
\end{proof}
We next turn to the control of the tail behavior of $g_n$ for large volumes, a step which is instrumental in the proof of the convergence of each integral on the right-hand side of \eqref{tncfe} to their respective limits on the right-hand side of \eqref{sce}.
\begin{lem}\label{massconservation}
For $T>0$, there is a positive constant $\Gamma(T)$ depending on $k$, $\sigma_1$, $g^{in}$, and $T$ such that
\begin{eqnarray*}
(i)~~\sup_{t\in [0,T]}\int_0^n \sigma_1(\zeta) g_n(\zeta,t) d\zeta\leq \Gamma(T),
\end{eqnarray*}
and
\begin{eqnarray*}
(ii)~~ (1-\theta) \int_0^T \int_{1/n}^n \int_{1/n}^n \sigma_1(\zeta) \chi_{[n,\infty)}(\zeta+\eta) \Psi(\zeta,\eta) g_n({\zeta},s) g_n({\eta},s) d\eta d\zeta ds\leq \Gamma(T).
\end{eqnarray*}
\end{lem}
\begin{proof}
Let $T>0$ and $t\in (0,T)$. We set $\omega(\zeta)=\sigma_1(\zeta)$, $\zeta\in (0,n)$, into \eqref{nctp3} and obtain
\begin{align*}
& \int_0^n \sigma_1(\zeta)[g_n(\zeta,t)-g_n^{in}(\zeta)]d\zeta \\
& = \frac{1}{2} \int_0^t \int_{1/n}^n \int_{1/n}^n \tilde{\sigma}_1(\zeta,\eta) \chi_{(0,n)}(\zeta+\eta) \Psi(\zeta,\eta) g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds \\
& \quad - \frac{1-\theta}{2} \int_0^t \int_{1/n}^n \int_{1/n}^n [\sigma_1(\zeta)+\sigma_1(\eta)] \chi_{[n,\infty)}(\zeta+\eta) \Psi(\zeta,\eta) g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds,
\end{align*}
recalling that $\tilde{\sigma}_1(\zeta,\eta) = \sigma_1(\zeta+\eta)-\sigma_1(\zeta)-\sigma_1(\eta)$, hence, using $(H2)$ and Lemma~\ref{lemmaconvex},
\begin{equation*}
\int_0^n \sigma_1(\zeta)[g_n(\zeta,t)-g_n^{in}(\zeta)]d\zeta \le \frac{k}{2} \sum_{i=1}^4 J_{i,n}(t) - (1-\theta) R_n(t),
\end{equation*}
with
\begin{align*}
J_{1,n}(t) & := \int_0^t \int_0^1 \int_0^1 \tilde{\sigma}_1(\zeta,\eta) (\zeta\eta)^{-\beta} g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds, \\
J_{2,n}(t) & := \int_0^t \int_0^1 \int_1^n \tilde{\sigma}_1(\zeta,\eta) \zeta^{-\beta} \eta g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds, \\
J_{3,n}(t) & := \int_0^t \int_1^n \int_0^1 \tilde{\sigma}_1(\zeta,\eta) \zeta \eta^{-\beta} g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds, \\
J_{4,n}(t) & := \int_0^t \int_1^n \int_1^n \tilde{\sigma}_1(\zeta,\eta) (\zeta+\eta) g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds,
\end{align*}
and
\begin{equation*}
R_n(t) := \int_0^t \int_{1/n}^n \int_{1/n}^n \sigma_1(\zeta) \chi_{[n,\infty)}(\zeta+\eta) \Psi(\zeta,\eta) g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds.
\end{equation*}
Owing to the concavity of $\sigma_1'$ and the property $\sigma_1(0)=0$, there holds
\begin{equation}
\tilde{\sigma}_1(\zeta,\eta) = \int_0^\zeta \int_0^\eta \sigma_1''(x+y) dydx \le \sigma_1''(0) \zeta \eta\ , \qquad (\zeta,\eta)\in (0,\infty)^2. \label{PhL2}
\end{equation}
By \eqref{PhL2}, Lemma~\ref{LemmaEquibound}, and Young's inequality,
\begin{align*}
J_{1,n}(t) & \le \sigma_1''(0) \int_0^t \int_0^1 \int_0^1 \zeta^{1-\beta} \eta^{-\beta} g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds \\
& \le \sigma_1''(0) \int_0^t \left[ \int_0^1 \left( \zeta + \zeta^{-2\beta} \right) g_n(\zeta,s) d\zeta \right]^2 ds \le \sigma_1''(0) \mathcal{B}^2 t.
\end{align*}
Next, Lemma~\ref{lemmaconvex}~$(iii)$, Lemma~\ref{LemmaEquibound}, and Young's inequality give
\begin{align*}
J_{2,n}(t) = J_{3,n}(t) & \le 2 \int_0^t \int_0^1 \int_1^n \frac{\zeta \sigma_1(\eta) + \eta \sigma_1(\zeta)}{\zeta+\eta} \zeta^{-\beta} \eta g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds \\
& \le 2 \int_0^t \int_0^1 \int_1^n \left[ \zeta^{1-\beta} \sigma_1(\eta) + \sigma_1(1) \zeta^{-\beta} \eta \right] g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds \\
& \le 2 \int_0^t \left[ \int_0^1 \left( \zeta + \zeta^{-2\beta} \right) g_n(\zeta,s) d\zeta \right] \left[ \int_1^n \sigma_1(\eta) g_n(\eta,s) d\eta \right] ds \\
& \quad + \sigma_1(1) \int_0^t \left[ \int_0^1 \left( \zeta + \zeta^{-2\beta} \right) g_n(\zeta,s) d\zeta \right] \left[ \int_1^n \eta g_n(\eta,s) d\eta \right] ds \\
& \le 2 \sigma_1(1) \mathcal{B}^2 t + 2 \mathcal{B} \int_0^t \int_0^n \sigma_1(\eta) g_n(\eta,s) d\eta ds,
\end{align*}
and
\begin{align*}
J_{4,n}(t) & \le 2 \int_0^t \int_1^n \int_1^n \left( \eta \sigma_1(\zeta) + \zeta \sigma_1(\eta) \right) g_n(\zeta,s) g_n(\eta,s) d\eta d\zeta ds \\
& \le 4 \mathcal{B} \int_0^t \int_0^n \sigma_1(\eta) g_n(\eta,s) d\eta ds.
\end{align*}
Gathering the previous estimates, we end up with
\begin{align*}
\int_0^n \sigma_1(\zeta)[g_n(\zeta,t)-g_n^{in}(\zeta)]d\zeta & \le k \left( \frac{\sigma_1''(0)}{2} + 2 \sigma_1(1) \right) \mathcal{B}^2 t \\
& \quad + 4k\mathcal{B} \int_0^t \int_0^n \sigma_1(\eta) g_n(\eta,s) d\eta ds - (1-\theta) R_n(t),
\end{align*}
and we infer from Gronwall's lemma and \eqref{convex2} that
\begin{align*}
\int_0^n \sigma_1(\zeta) g_n(\zeta,t) d\zeta + (1-\theta) R_n(t) & \le e^{4k\mathcal{B}t} \int_0^n \sigma_1(\zeta) g_n^{in}(\zeta) d\zeta + \left( \frac{\sigma_1''(0)}{8} + \frac{\sigma_1(1)}{2} \right) \mathcal{B} e^{4k\mathcal{B}t} \\
& \le \left[ \mathcal{I}_1 + \left( \sigma_1''(0) + \sigma_1(1) \right) \mathcal{B} \right] e^{4k\mathcal{B}t}.
\end{align*}
This completes the proof of Lemma~\ref{massconservation}.
\end{proof}
\subsection{Uniform integrability}
Next, our aim being to apply Dunford-Pettis' theorem, we have to prevent concentration of the sequence $(g_n)_{n\ge 1}$ on sets of arbitrarily small measure. For that purpose, we establish the following result.
\begin{lem}\label{LemmaEquiintegrable}
For any $T>0$ and $\lambda >0$, there is a positive constant $\mathcal{L}_1(\lambda, T)$ depending only on $k$, $\sigma_2$, $g^{in}$, ${\lambda}$, and $T$ such that
\begin{equation*}
\sup_{t\in [0,T]}\int_0^{\lambda} \sigma_2\left( {\zeta}^{-\beta} g_n(\zeta,t) \right) d\zeta \leq \mathcal{L}_1({\lambda},T).
\end{equation*}
\end{lem}
\begin{proof} {For $(\zeta,t)\in (0,n)\times (0,\infty)$, we set $u_n({\zeta},t):={\zeta}^{-\beta} g_n(\zeta,t)$. Let $\lambda\in (1,n)$, $T>0$, and $t\in (0,T)$. Using Leibniz's rule, Fubini's theorem, and (\ref{tncfe}), we obtain
\begin{align}\label{Equiint1}
\hspace{-.2cm}\frac{d}{dt}\int_0^{\lambda} \sigma_2(u_n({\zeta},t)) d\zeta \leq & \frac{1}{2}\int_0^{\lambda}\hspace{-.1cm}\int_0^{\lambda -\eta} \sigma_2{'}(u_n({\zeta+\eta},t)) {(\zeta+\eta)}^{-\beta} \Psi_n^\theta({\zeta},{\eta}) g_n({\zeta},t) g_n(\eta,t) d{\zeta} d{\eta}.
\end{align}
It also follows from $(H2)$ that
\begin{equation}
\Psi_n^\theta(\zeta,\eta) \le \Psi(\zeta,\eta) \le 2k \lambda^{1+2\beta} (\zeta \eta)^{-\beta}, \qquad (\zeta,\eta)\in (0,\lambda)^2. \label{PhL3}
\end{equation}
We then infer from \eqref{Equiint1}, \eqref{PhL3}, Lemma~\ref{lemmaconvex}~$(ii)$ and Lemma~\ref{LemmaEquibound} that
\begin{align*}
\frac{d}{dt}\int_0^{\lambda} \sigma_2(u_n({\zeta},t))d{\zeta} \leq & k \lambda^{1+2\beta} \int_0^{\lambda} \int_0^{{\lambda}-{\eta}}\sigma_2^{'}(u_n({\zeta}+{\eta},t)) ({\zeta}+{\eta})^{-\beta} u_n(\zeta,t) u_n(\eta,t)d{\zeta} d{\eta}\nonumber\\
\leq & k \lambda^{1+2\beta} \int_0^{\lambda} \int_0^{{\lambda}-{\eta}} {\eta}^{-\beta} \left[ \sigma_2(u_n(\zeta+\eta,t)) + \sigma_2(u_n(\zeta,t)) \right] u_n(\eta,t) d\zeta d\eta\nonumber\\
\leq & 2 k \lambda^{1+2\beta} \int_0^{\lambda} \eta^{-2\beta} g_n(\eta,t) \int_0^{{\lambda}-{\eta}} \sigma_2(u_n({\zeta}+{\eta},t)) d\zeta d\eta\nonumber\\
\leq & 2 k \lambda^{1+2\beta} \mathcal{B} \int_0^\lambda \sigma_2(u_n(\zeta,t)) d\zeta.
\end{align*}}
Then, using Gronwall's lemma, the monotonicity of $\sigma_2$, and \eqref{convex2}, we obtain
\begin{eqnarray*}
\int_0^{\lambda} \sigma_2({\zeta}^{-\beta} g_n(\zeta,t)) d\zeta \leq \mathcal{L}_1({\lambda}, T),
\end{eqnarray*}
where $\mathcal{L}_1({\lambda, T}):=\mathcal{I}_2 e^{2 k \lambda^{1+2\beta} \mathcal{B} T}$, and the proof is complete.
\end{proof}
\subsection{Time equicontinuity}
The outcome of the previous sections settles the (weak) compactness issue with respect to the volume variable. We now turn to the time variable.
\begin{lem}\label{timequic}
Let $t_2\ge t_1 \ge 0$ and $\lambda\in (1,n)$. There is a positive constant $\mathcal{L}_2(\lambda)$ depending only on $k$, $g^{in}$, and $\lambda$ such that
\begin{equation*}
\int_0^\lambda \zeta^{-\beta} |g_n(\zeta,t_2) - g_n(\zeta,t_1)| d\zeta \le \mathcal{L}_2(\lambda) (t_2-t_1).
\end{equation*}
\end{lem}
\begin{proof}
Let $t>0$. On the one hand, by Fubini's theorem, \eqref{PhL3}, and Lemma~\ref{LemmaEquibound},
\begin{align*}
\int_0^\lambda \zeta^{-\beta} \mathcal{B}_c(g_n)(\zeta,t) d\zeta & \le \frac{1}{2} \int_0^\lambda \int_0^{\lambda-\zeta} (\zeta+\eta)^{-\beta} \Psi(\zeta,\eta) g_n(\zeta,t) g_n(\eta,t) d\eta d\zeta \\
& \le k \lambda^{1+2\beta} \int_0^\lambda \int_0^\lambda \zeta^{-\beta} \eta^{-2\beta} g_n(\zeta,t) g_n(\eta,t) d\eta d\zeta \\
& \le k \lambda^{1+3\beta} \left( \int_0^\lambda \zeta^{-2\beta} g_n(\zeta,t) d\zeta \right)^2 \le k \lambda^{1+3\beta} \mathcal{B}^2.
\end{align*}
On the other hand, since
\begin{equation*}
\Psi_n^\theta(\zeta,\eta) \le \Psi(\zeta,\eta) \le 2k \lambda^\beta \eta \zeta^{-\beta},\ \qquad 0 < \zeta < \lambda < \eta < n,
\end{equation*}
we infer from \eqref{PhL3} and Lemma~\ref{LemmaEquibound} that
\begin{align*}
\int_0^\lambda \zeta^{-\beta} \mathcal{D}_{c,n}^\theta(g_n)(\zeta,t) d\zeta & \le \int_0^\lambda \int_0^n \zeta^{-\beta} \Psi(\zeta,\eta) g_n(\zeta,t) g_n(\eta,t) d\eta d\zeta \\
& \le 2k\lambda^{1+2\beta} \int_0^\lambda \int_0^\lambda \zeta^{-2\beta} \eta^{-\beta} g_n(\zeta,t) g_n(\eta,t) d\eta d\zeta \\
& \quad + 2k\lambda^\beta \int_0^\lambda \int_\lambda^n \zeta^{-\beta} \eta g_n(\zeta,t) g_n(\eta,t) d\eta d\zeta \\
& \le 2k \mathcal{B}^2 (1+\lambda^{1+\beta}) \lambda^{\beta}.
\end{align*}
Consequently, by \eqref{tncfe},
\begin{align*}
\int_0^\lambda \zeta^{-\beta} |g_n(\zeta,t_2) - g_n(\zeta,t_1)| d\zeta & \le \int_{t_1}^{t_2} \int_0^\lambda \zeta^{-\beta} \left| \frac{\partial g_n}{\partial t}(\zeta,t) \right| d\zeta dt \\
& \le \int_{t_1}^{t_2} \int_0^\lambda \zeta^{-\beta} \left[ \mathcal{B}_c(g_n)(\zeta,t) + \mathcal{D}_{c,n}^\theta(g_n)(\zeta,t) \right] d\zeta dt \\
& \le k \mathcal{B}^2 (2+2\lambda^{1+\beta}+\lambda^{1+2\beta}) \lambda^{\beta} (t_2-t_1),
\end{align*}
which completes the proof with $\mathcal{L}_2(\lambda):=k \mathcal{B}^2 (2+2\lambda^{1+\beta}+\lambda^{1+2\beta}) \lambda^{\beta}$.
\end{proof}
\subsection{Convergence}
We are now in a position to complete the proof of the existence of a weak solution to the SCE \eqref{sce}--\eqref{1in1}.
\begin{proof}[Proof of Theorem~\ref{thm1}] For $(\zeta,t)\in (0,n)\times (0,\infty)$, we set $u_n({\zeta},t):={\zeta}^{-\beta} g_n(\zeta,t)$. Let $T>0$ and $\lambda>1$. Owing to the superlinear growth \eqref{convex1} of $\sigma_2$ at infinity and Lemma~\ref{LemmaEquiintegrable}, we infer from Dunford-Pettis' theorem that there is a weakly compact subset $\mathcal{K}_{\lambda,T}$ of $L^1(0,\lambda)$ such that $(u_n(t))_{n\ge 1}$ lies in $\mathcal{K}_{\lambda,T}$ for all $t\in [0,T]$. Moreover, by Lemma~\ref{timequic}, $(u_n)_{n\ge 1}$ is strongly equicontinuous in $L^1(0,\lambda)$ at all $t\in (0,T)$ and thus also weakly equicontinuous in $L^1(0,\lambda)$ at all $t\in (0,T)$. A variant of \emph{Arzel\`{a}-Ascoli's theorem} \cite[Theorem~1.3.2]{Vrabie:1995} then guarantees that $(u_n)_{n\ge 1}$ is relatively compact in $\mathcal{C}_w([0,T];L^1(0,\lambda))$. This property being valid for all $T>0$ and $\lambda>1$, we use a diagonal process to obtain a subsequence of $(g_n)_{n\ge 1}$ (not relabeled) and a non-negative function $g$ such that
\begin{equation*}
g_n \longrightarrow g ~~~\mbox{in} ~~\mathcal{C}_w([0,T]; L^1(0,\lambda))
\end{equation*}
for all $T>0$ and $\lambda>1$. Owing to Lemma~\ref{massconservation} and the superlinear growth \eqref{convex1} of $\sigma_1$ at infinity, a by-now classical argument allows us to improve the previous convergence to
\begin{equation}
g_n \longrightarrow g ~~~\mbox{in} ~~\mathcal{C}_w([0,T]; L^1((0,\infty);(\zeta^{-\beta}+\zeta)d\zeta)). \label{PhL4}
\end{equation}
To complete the proof of Theorem~\ref{thm1}, it remains to show that $g$ is a weak solution to the SCE \eqref{sce}--\eqref{1in1} on $[0,\infty)$ in the sense of Definition~\ref{definition}. This step is carried out by the classical approach of \cite{Stewart:1989} with some modifications as in \cite{Camejo:2015I, Camejo:2015II} and \cite{Laurencot:2002L} to handle the convergence of the integrals for small and large volumes, respectively. In particular, on the one hand, the behavior for large volumes is controlled by the estimates of Lemma~\ref{massconservation} with the help of the superlinear growth \eqref{convex1} of $\sigma_1$ at infinity and the linear growth $(H2)$ of $\Psi$. On the other hand, the behavior for small volumes is handled by $(H2)$, Lemma~\ref{LemmaEquibound}, and \eqref{PhL4}.\\
Finally, $g$ being a weak solution to \eqref{sce}--\eqref{1in1} on $[0,\infty)$ in the sense of Definition~\ref{definition}, it is mass-conserving according to Theorem~\ref{themcs}, which completes the proof of Theorem~\ref{thm1}.
\end{proof}
\section*{Acknowledgments}
This work was supported by the University Grants Commission (UGC), India, through a Ph.D.\ fellowship to PKB. AKG would like to thank the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), India, for providing funding support through the project YSS/2015/001306.
\bibliographystyle{plain}
TITLE: Proof of chain rule on wikipedia: what does this sentence mean?
QUESTION [1 upvotes]: On Wikipedia, it says the following about the first proof of the chain rule:
$\lim_{x \to a} \frac{f(g(x)) - f(g(a))}{g(x) - g(a)} \cdot \frac{g(x) - g(a)}{x - a}$
When $g$ oscillates near $a$, then it might happen that no matter how close one gets to $a$, there is always an even closer $x$ such that $g(x)$ equals $g(a)$. For example, this happens for $g(x) = x^2\sin(\frac{1}{x})$ near the point $a = 0$. Whenever this happens, the above expression is undefined because it involves division by zero.
I can see that with the function $g(x) = x^2\sin(\frac{1}{x})$, as $x$ approaches $0$, the amplitude goes to $0$ and the frequency goes to infinity, and I also know by the squeeze theorem that $\lim_{x \to 0} g(x) = 0$. But $g(a)$ at $a=0$ is undefined, since $0^2\sin(1/0)$ is undefined, so how can $g(x) - g(a) = 0$, i.e. $g(x)=g(a)$? If the statement is not true at the point $0$, but only near the point $0$, how can it be shown that $g(x)=g(a)$ at such a point?
REPLY [0 votes]: The Wikipedia article is actually trying to point out a common flaw seen in most usual proofs of the chain rule. The typical proof uses the limit expression given in your question and then says that the first factor tends to $f'(g(a))$ and the second factor tends to $g'(a)$. Thus the final limit is $f'(g(a)) g'(a)$ and the proof of the chain rule is complete. Wikipedia is saying that this does not work when $g(x) = g(a)$ for $x$ arbitrarily close to $a$. In such cases the proof of the chain rule has to proceed in a different manner. Wikipedia uses the phrase "to work around this" to show that the proof has an issue which needs to be circumvented somehow.
When $g(x) = g(a)$ for $x$ arbitrarily close to $a$, we have $g'(a) = 0$, and one needs to prove that $(f\circ g)'(a) = 0$ in order to establish the chain rule.
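To make the obstruction concrete, here is a small numerical sketch (not part of the original answer; it uses the standard convention $g(0)=0$, the value that makes $g$ continuous, indeed differentiable, at $0$). The points $x_k = 1/(k\pi)$ tend to $0$, yet $g(x_k) = 0 = g(0)$ for every $k$, so the factor $\frac{f(g(x))-f(g(a))}{g(x)-g(a)}$ is undefined at points arbitrarily close to $a=0$:

```python
import math

def g(x):
    # g(x) = x^2 sin(1/x) for x != 0, extended by g(0) = 0.
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# x_k = 1/(k*pi) -> 0, and g(x_k) = (x_k)^2 sin(k*pi) = 0 = g(0),
# so g(x) - g(a) vanishes arbitrarily close to a = 0.
bad_points = [1.0 / (k * math.pi) for k in range(1, 6)]
```

So no matter how small a punctured neighborhood of $0$ one takes, it contains a point where the first factor's denominator is zero; this is exactly why the naive two-factor proof breaks down.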
TITLE: Compactness and Limit points
QUESTION [3 upvotes]: If $A$ is compact, how can it be shown that any sequence $\{x_n\}$ in $A$ has a limit point in $A$?
I know this is proven in a lot of textbooks but I'm finding this hard to conceptualise.
REPLY [2 votes]: I assume that $\{x_n\}_{n \in \mathbb{N}}$ is infinite, otherwise the statement is false (if you mean accumulation point). Suppose that such a set has no accumulation point; then it is an infinite discrete closed subset of a compact space, which is impossible. In fact, having no accumulation points implies that your subset is closed in $A$, and so it is compact; for each point there is an open set which contains only one point of your subset (there being no limit points), and so that open cover has no finite subcover.
(I still have the doubt that you mean cluster point)
TITLE: Summation of series in powers of x with certain combinations as coefficients
QUESTION [6 upvotes]: How can I find the sum: $$\sum_{k=0}^{n} \binom{n-k}{k}x^{k}$$
Edit: The answer to this question is: $$\frac{{(1+\sqrt{1+4x})}^{n+1}-{(1-\sqrt{1+4x})}^{n+1}}{2^{n+1}\sqrt{1+4x}}$$ I don't know how to arrive at this answer.
REPLY [4 votes]: This answer is similar to this answer and this answer. That is, compute the generating function of the given sequence:
$$
\begin{align}
&\sum_{n=0}^\infty\sum_{k=0}^n\binom{n-k}{k}x^ky^n\\
&=\sum_{k=0}^\infty\sum_{n=k}^\infty\binom{n-k}{k}x^ky^n\tag{1}\\
&=\sum_{k=0}^\infty\sum_{n=0}^\infty\binom{n}{k}x^ky^{n+k}\tag{2}\\
&=\sum_{k=0}^\infty\frac{(xy^2)^k}{(1-y)^{k+1}}\tag{3}\\
&=\frac1{1-y}\frac1{1-\frac{xy^2}{1-y}}\tag{4}\\
&=\frac1{1-y-xy^2}\tag{5}\\
&=\frac1{\left(1-\frac{1-\sqrt{1+4x}}2y\right)\left(1-\frac{1+\sqrt{1+4x}}2y\right)}\tag{6}\\
&=\frac{\frac{-1+\sqrt{1+4x}}{2\sqrt{1+4x}}}{1-\frac{1-\sqrt{1+4x}}2y}+\frac{\frac{1+\sqrt{1+4x}}{2\sqrt{1+4x}}}{1-\frac{1+\sqrt{1+4x}}2y}\tag{7}\\
&=\frac1{\sqrt{1+4x}}\sum_{n=0}^\infty\left[\left(\frac{1+\sqrt{1+4x}}2\right)^{n+1}-\left(\frac{1-\sqrt{1+4x}}2\right)^{n+1}\right]y^n\tag{8}
\end{align}
$$
Explanation:
$(1)$: change order of summation
$(2)$: substitute $n\mapsto n+k$
$(3)$: $\sum\limits_{n\ge k}\binom{n}{k}y^n=\frac{y^k}{(1-y)^{k+1}}$
$(4)$: $\sum\limits_{k\ge 0}x^k=\frac1{1-x}$
$(5)$: simplify
$(6)$: quadratic formula
$(7)$: partial fractions
$(8)$: $\sum\limits_{n\ge 0}x^n=\frac1{1-x}$
Equating coefficients of $y^n$ yields
$$
\sum_{k=0}^n\binom{n-k}{k}x^k
=\frac1{\sqrt{1+4x}}\left[\left(\frac{1+\sqrt{1+4x}}2\right)^{n+1}-\left(\frac{1-\sqrt{1+4x}}2\right)^{n+1}\right]
$$
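As a sanity check on this closed form (illustrative code, not part of the derivation): step $(5)$ says the generating function is $1/(1-y-xy^2)$, which is equivalent to the recurrence $S_{n+1}(x)=S_n(x)+x\,S_{n-1}(x)$ for $S_n(x)=\sum_{k=0}^n\binom{n-k}{k}x^k$; and at $x=1$ the closed form becomes Binet's formula, so $S_n(1)=F_{n+1}$, the $(n+1)$-st Fibonacci number.

```python
from fractions import Fraction
from math import comb

def S(n, x):
    # S_n(x) = sum_{k=0}^n C(n-k, k) x^k; math.comb(m, k) is 0 when k > m.
    return sum(comb(n - k, k) * x**k for k in range(n + 1))

# The generating function 1/(1 - y - x y^2) encodes S_{n+1} = S_n + x S_{n-1};
# check it exactly at a rational value of x.
x = Fraction(3, 7)
recurrence_ok = all(S(n + 1, x) == S(n, x) + x * S(n - 1, x) for n in range(1, 12))

# At x = 1: S_n(1) = F_{n+1} (Fibonacci numbers with F_1 = F_2 = 1).
fib = [1, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
fibonacci_ok = [S(n, 1) for n in range(12)] == fib
```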
\begin{document}
\title[On integrality properties of hypergeometric series]{On integrality properties of \\ hypergeometric series}
\author{Alan Adolphson}
\address{Department of Mathematics\\
Oklahoma State University\\
Stillwater, Oklahoma 74078}
\email{[email protected]}
\author{Steven Sperber}
\address{School of Mathematics\\
University of Minnesota\\
Minneapolis, Minnesota 55455}
\email{[email protected]}
\date{\today}
\keywords{}
\subjclass{}
\begin{abstract}
Let $A$ be a set of $N$ vectors in ${\mathbb Z}^n$ and let $v$ be a vector in~${\mathbb C}^N$
that has minimal negative support for $A$. Such a vector $v$ gives rise to a formal series
solution of the $A$-hypergeometric system with parameter $\beta=Av$. If $v$ lies
in~${\mathbb Q}^n$, then this series has rational coefficients. Let $p$ be a prime number.
We characterize those $v$ whose coordinates are rational, $p$-integral, and lie in the closed
interval $[-1,0]$ for which the corresponding normalized series solution has $p$-integral
coefficients. From this we deduce further integrality results for hypergeometric series.
\end{abstract}
\maketitle
\section{Introduction}
The $p$-integrality properties of hypergeometric series were first examined by Dwork\cite{D1,D2}; later contributions are due to Christol\cite{C} and the authors\cite{AS}. Here we deduce further integrality properties from \cite[Theorems~1.5 and~3.5]{AS}, whose proofs are included to make this article self-contained. Recently Delaygue\cite{D} and Delaygue-Rivoal-Roques\cite{DRR} applied certain integrality criteria for hypergeometric series in their study of mirror maps. The integrality criterion of Theorem~5.6 generalizes those criteria. The recent work of Franc-Gannon-Mason\cite{FGM} motivated us to study the $p$-adic unboundedness of hypergeometric series, which is described in Section~7.
Let $A=\{ {\bf a}_1,\dots,{\bf a}_N \}\subseteq{\mathbb Z}^n$ and let $L\subseteq{\mathbb Z}^N$
be the lattice of relations on $A$:
\[ L = \bigg\{l=(l_1,\dots,l_N)\in{\mathbb Z}^N\;\bigg|\; \sum_{i=1}^N l_i{\bf a}_i = {\bf 0}
\bigg\}. \]
Let $\beta = (\beta_1,\dots,\beta_n)\in{\mathbb C}^n$. The {\it $A$-hypergeometric system with
parameter $\beta$\/} is the system of partial differential operators in $\lambda_1,\dots,
\lambda_N$ consisting of the {\it box operators\/}
\begin{equation}
\Box_l = \prod_{l_i>0} \bigg( \frac{\partial}{\partial \lambda_i}\bigg)^{l_i} - \prod_{l_i<0} \bigg(
\frac{\partial}{\partial \lambda_i}\bigg)^{-l_i} \quad\text{for $l\in L$}
\end{equation}
and the {\it Euler\/} or {\it homogeneity\/} operators
\begin{equation}
Z_i = \sum_{j=1}^N a_{ij}\lambda_j\frac{\partial}{\partial\lambda_j} -\beta_i\quad\text{for $i=1,
\dots,n$},
\end{equation}
where ${\bf a}_j = (a_{1j},\dots,a_{nj})$. If there is a linear form $h$ on ${\mathbb R}^n$ such
that $h({\bf a}_i)=1$ for $i=1,\dots,N$, we call this system {\it nonconfluent}; otherwise, we
call it {\it confluent}.
Let $v=(v_1,\dots,v_N)\in{\mathbb C}^N$. The {\it negative support\/} of $v$ is the set
\[ {\rm nsupp}(v) = \{i\in\{1,\dots,N\} | \text{ $v_i$ is a negative integer}\}. \]
Let
\[ L_v = \{l\in L\mid {\rm nsupp}(v+l) = {\rm nsupp}(v)\} \]
and put
\begin{equation}
\Phi_v(\lambda) = \sum_{l\in L_v} [v]_l \lambda^{v+l},
\end{equation}
where
\[ [v]_l = \prod_{i=1}^N [v_i]_{l_i} \]
and
\[ [v_i]_{l_i} = \begin{cases} 1 & \text{if $l_i=0$}, \\ \displaystyle \frac{1}{(v_i+1)(v_i+2)\cdots(v_i+l_i)} & \text{if $l_i>0$,} \\ v_i(v_i-1)\cdots(v_i+l_i+1) & \text{if $l_i<0$.} \end{cases} \]
Note that since $l\in L_v$, we have $l_i<-v_i$ if $v_i\in{\mathbb Z}_{<0}$, so $[v_i]_{l_i}$ is always well-defined.
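The factors $[v_i]_{l_i}$ can be evaluated in exact rational arithmetic; the sketch below (hypothetical helper names, and it assumes $l_i<-v_i$ whenever $v_i\in{\mathbb Z}_{<0}$, so that no factor vanishes) mirrors the three cases of the definition:

```python
from fractions import Fraction

def bracket(v_i, l_i):
    # [v_i]_{l_i}: 1 if l_i = 0; 1/((v_i+1)...(v_i+l_i)) if l_i > 0;
    # v_i (v_i - 1) ... (v_i + l_i + 1) if l_i < 0.
    if l_i == 0:
        return Fraction(1)
    if l_i > 0:
        prod = Fraction(1)
        for j in range(1, l_i + 1):
            prod *= v_i + j
        return 1 / prod
    prod = Fraction(1)
    for j in range(-l_i):
        prod *= v_i - j
    return prod

def bracket_vec(v, l):
    # [v]_l = prod_i [v_i]_{l_i}.
    prod = Fraction(1)
    for v_i, l_i in zip(v, l):
        prod *= bracket(v_i, l_i)
    return prod
```

Note the telescoping identity $[v_i]_{l_i}\,[v_i+l_i]_{-l_i}=1$ for $l_i>0$, visible directly from the two products.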
The vector $v$ is said to have {\it minimal negative support\/} if there is no $l\in L$ for
which ${\rm nsupp}(v+l)$ is a proper subset of ${\rm nsupp}(v)$. The series $\Phi_v(\lambda)$
is a formal solution of the system (1.1), (1.2) for $\beta=\sum_{i=1}^N v_i{\bf a}_i$ if and only
if $v$ has minimal negative support (see Saito-Sturmfels-Takayama\cite[Proposition 3.4.13]{SST}).
Let $p$ be a prime number. In \cite{D1,D2}, Dwork introduced the idea of normalizing
hypergeometric series to have $p$-adic radius of convergence equal to~$1$. This involves
simply replacing each variable $\lambda_i$ by $\pi\lambda_i$, where $\pi$ is any uniformizer of ${\mathbb Q}_p(\zeta_p)$. Thus $\text{ord}_p\:\pi = 1/(p-1)$. (For further applications, Dwork chose $\pi$ to be a solution of
$\pi^{p-1} = -p$.) We define the $p$-adically normalized hypergeometric series to be
\begin{equation}
\Phi_{v,\pi}(\lambda) = \sum_{l\in L_v} [v]_l \pi^{\sum_{i=1}^N l_i}\lambda^{v+l}
\quad\big(=\pi^{-v}\Phi_v(\pi\lambda)\big).
\end{equation}
Note that for nonconfluent $A$-hypergeometric systems one has $\sum_{i=1}^N l_i = 0$, so in that
case we have $\Phi_{v,\pi}(\lambda) = \Phi_v(\lambda)$, i.e., the normalized series is just the usual
one. In this paper we study the $p$-integrality of the coefficients of the $\lambda^{v+l}$ in
$\Phi_{v,\pi}(\lambda)$.
Let ${\mathbb N}$ denote the set of nonnegative integers and ${\mathbb N}_+$ the set of positive integers. Every $t\in{\mathbb N}$ has a $p$-adic expansion
\[ t=t_0+t_1p+\cdots+t_{b-1}p^{b-1}, \quad\text{$0\leq t_j\leq p-1$ for all $j$.} \]
We define the {\it $p$-weight\/} of $t$ to be ${\rm wt}_p(t) = \sum_{j=0}^{b-1} t_j$.
This definition is extended to vectors of nonnegative integers componentwise:
if $s=(s_1,\dots,s_N)\in{\mathbb N}^N$, define ${\rm wt}_p(s) = \sum_{i=1}^N {\rm wt}_p(s_i)$.
Let $R_p$ be the set of all $p$-integral rational vectors $(r_1,\dots,r_N)\in({\mathbb Q}\cap{
\mathbb Z}_p)^N$ satisfying $-1\leq r_i\leq 0$ for $i=1,\dots,N$. For $r\in R_p$ choose a power
$p^a$ such that $(1-p^a)r\in{\mathbb N}^N$ and set $s=(1-p^a)r$. We define a {\it
weight function\/} $w_p$ on $R_p$ by setting $w_p(r) = {\rm wt}_p(s)/a$. The positive integer $a$ is not
uniquely determined by~$r$ but the ratio ${\rm wt}_p(s)/a$ is independent of the choice of $a$ and
depends only on $r$. Note that since $0\leq (1-p^a)r_i\leq p^a-1$ for all $i$ we have $0\leq
{\rm wt}_p(s)\leq aN(p-1)$ and $0\leq w_p(r)\leq N(p-1)$.
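As a concrete illustration (code not from the paper), ${\rm wt}_p$ and $w_p$ can be computed in exact arithmetic; the example also checks, for $r=(-1/3,-2/3)$ and $p=2$, that the ratio ${\rm wt}_p(s)/a$ is the same for two admissible choices of $a$, as asserted above.

```python
from fractions import Fraction

def wt(t, p):
    # p-weight of a nonnegative integer: digit sum of its base-p expansion.
    s = 0
    while t > 0:
        s += t % p
        t //= p
    return s

def w_p(r, p, a):
    # w_p(r) = wt_p((1 - p^a) r) / a for r in R_p with (1 - p^a) r in N^N.
    s = [(1 - p**a) * ri for ri in r]
    assert all(si.denominator == 1 and 0 <= si <= p**a - 1 for si in s)
    return Fraction(sum(wt(int(si), p) for si in s), a)

# r = (-1/3, -2/3) lies in R_2; both a = 2 and a = 4 make (1 - 2^a) r integral.
r = (Fraction(-1, 3), Fraction(-2, 3))
```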
We consider $A$-hypergeometric systems (1.1), (1.2) for those $\beta$ for which the set
\[ R_p(\beta) = \bigg\{r=(r_1,\dots,r_N)\in R_p\;\bigg|\; \sum_{i=1}^N r_i{\bf a}_i = \beta\bigg\} \]
is nonempty. Define $w_p(R_p(\beta)) = \inf\{w_p(r)\mid r\in R_p(\beta)\}$. Trivially, if $v\in R_p(\beta)$
then $w_p(v)\geq w_p(R_p(\beta))$.
Our first main result is the following statement.
\begin{theorem}
If $v\in R_p(\beta)$, then the series $\Phi_{v,\pi}(\lambda)$ has $p$-integral coefficients if and only if
$w_p(v)=w_p(R_p(\beta))$. If $w_p(v)>w_p(R_p(\beta))$, then the coefficients of $\Phi_{v,\pi}(\lambda)$ are
$p$-adically unbounded.
\end{theorem}
Note that we do not assume in Theorem 1.5 that $v$ has minimal negative support, so the series
$\Phi_v(\lambda)$ is not necessarily a solution of the system (1.1), (1.2).
Direct application of Theorem 1.5 is limited by the fact that we do not know a general procedure for computing $w_p(R_p(\beta))$ or for determining whether there exists $v\in R_p(\beta)$ such that $w_p(v) = w_p(R_p(\beta))$. However, we do give a lower bound for $w_p(R_p(\beta))$ (Theorem 4.6). In many cases of interest, one can find $v\in R_p(\beta)$ for which $w_p(v)$ equals this lower bound. This implies by the definitions that $w_p(v)=w_p(R_p(\beta))$, hence, by Theorem~1.5, $\Phi_{v,\pi}(\lambda)$ has $p$-integral coefficients. This leads to a useful condition for certain classical hypergeometric series to have $p$-integral coefficients (Theorem~5.6) and for the series $\Phi_v(\lambda)$ to have integral coefficients (Theorem~6.3).
In the other direction, we use Theorem 1.5 to show that if the series (1.4) fails to have $p$-integral coefficients for some prime $p$ for which $v$ is $p$-integral, then it fails to have $p$-integral coefficients for infinitely many primes. This will imply by a generalization of a theorem of Eisenstein that if $\Phi_v(\lambda)$ is an algebraic function, then it must have $p$-integral coefficients for all primes $p$ for which $v$ is $p$-integral (see Section 7).
\section{Proof of Theorem 1.5}
For $t\in{\mathbb N}$, set
\[ \alpha_p(t) = \frac{t-{\rm wt}_p(t)}{p-1}\in{\mathbb N}. \]
Theorem 1.5 will follow from the well-known formula
\begin{equation}
{\rm ord}_p\: t! = \alpha_p(t).
\end{equation}
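Formula (2.1) is Legendre's formula for ${\rm ord}_p\,t!$. A brute-force check over small $t$ and small primes (an illustrative sketch, not part of the proof):

```python
from math import factorial

def wt(t, p):
    # Digit sum of t in base p.
    s = 0
    while t > 0:
        s += t % p
        t //= p
    return s

def ord_p(m, p):
    # p-adic valuation of a positive integer m.
    v = 0
    while m % p == 0:
        v += 1
        m //= p
    return v

def alpha(t, p):
    # alpha_p(t) = (t - wt_p(t)) / (p - 1), always a nonnegative integer.
    q, rem = divmod(t - wt(t, p), p - 1)
    assert rem == 0
    return q

legendre_ok = all(ord_p(factorial(t), p) == alpha(t, p)
                  for p in (2, 3, 5, 7) for t in range(1, 200))
```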
For $k\in{\mathbb N}$, $k\leq t$, put
\[ \beta_p(t,k) = \alpha_p(t) - \alpha_p(t-k) = \frac{k-{\rm wt}_p(t) + {\rm wt}_p(t-k)}{p-1}\in{\mathbb N}. \]
Then
\begin{equation}
{\rm ord}_p\: \frac{t!}{(t-k)!} = {\rm ord}_p\: \prod_{i=0}^{k-1} (t-i) = \beta_p(t,k).
\end{equation}
We extend (2.2) to $t\in{\mathbb Z}_p$. Write
\[ t = \sum_{i=0}^\infty t_ip^i,\quad \text{$0\leq t_i\leq p-1$ for all $i$,} \]
and set $t^{(b)} = \sum_{i=0}^{b-1} t_ip^i\in{\mathbb N}$. Then $t\equiv t^{(b)}\pmod{p^b}$.
\begin{lemma}
Let $k\in{\mathbb N}$ with $k\leq t^{(b)}$. One has
\[ \prod_{i=0}^{k-1} (t-i) \equiv \prod_{i=0}^{k-1} (t^{(b)}-i)\pmod{p^{\beta_p(t^{(b)},k)+1}}. \]
\end{lemma}
\begin{proof}
Write $t=t^{(b)}+p^b\epsilon$, where $\epsilon\in{\mathbb Z}_p$. We have
\[ \prod_{i=0}^{k-1} (t-i) =\prod_{i=0}^{k-1} (t^{(b)}-i) + \sum_{j=1}^k \sum_{0\leq i_1<\dots<i_j\leq k-1} M(i_1,\dots,i_j), \]
where
\[ M(i_1,\dots,i_j) = t^{(b)}(t^{(b)}-1)\cdots \widehat{(t^{(b)}-i_1)}\cdots \widehat{(t^{(b)}-i_j)}\cdots (t^{(b)}-k+1) (p^b\epsilon)^j. \]
We have $0<t^{(b)}-i_j<\dots<t^{(b)}-i_1\leq p^b-1$, so each of these $j$ positive integers has $p$-ordinal $<b$. This implies that
\[ {\rm ord}_p\:M(i_1,\dots,i_j)> {\rm ord}_p\:\prod_{i=0}^{k-1} (t^{(b)}-i) = \beta_p(t^{(b)},k), \]
where the last equality follows from (2.2).
\end{proof}
\begin{corollary}
For $t\in{\mathbb Z}_p$, $k\in{\mathbb N}$, $k\leq t^{(b)}$, one has
\[ {\rm ord}_p\: \prod_{i=0}^{k-1} (t-i) = \beta_p(t^{(b)},k). \]
\end{corollary}
We apply Corollary 2.4 to determine the $p$-divisibility of the coefficients of the series~(1.4). For $v=(v_1,\dots,v_N)\in{\mathbb Z}_p^N$, we write the $p$-adic expansions of the $v_i$ as
\[ v_i = \sum_{j=0}^\infty v_{ij}p^j,\quad\text{$0\leq v_{ij}\leq p-1$ for all $j$,} \]
and we set
\[ v_i^{(b)} = \sum_{j=0}^{b-1} v_{ij}p^j. \]
We put $v^{(b)} = (v_1^{(b)},\dots,v_N^{(b)})\in \{0,1,\dots,p^b-1\}^N$.
\begin{proposition}
Let $v\in{\mathbb Z}_p^N$ and let $l\in{\mathbb Z}^N$ with ${\rm nsupp}(v+l) = {\rm nsupp}(v)$. For all sufficiently large positive integers $b$, one has
\begin{equation}
0\leq v_i^{(b)} + l_i\leq p^b-1\quad\text{for $i=1,\dots,N$,}
\end{equation}
in which case
\begin{equation}
{\rm ord}_p\: [v]_l \pi^{\sum_{i=1}^N l_i} = \frac{{\rm wt}_p((v+l)^{(b)})-{\rm wt}_p(v^{(b)})}{p-1}.
\end{equation}
\end{proposition}
\begin{proof}
We show first that (2.7) follows from (2.6). Fix $i$, $1\leq i\leq N$. If $l_i<0$, then
\[ [v_i]_{l_i} = \prod_{k=0}^{-l_i-1} (v_i-k), \]
so by Corollary 2.4
\[ {\rm ord}_p\:[v_i]_{l_i} = \beta_p(v_i^{(b)},-l_i) = \frac{-l_i -{\rm wt}_p(v_i^{(b)}) + {\rm wt}_p(v_i^{(b)}+l_i)}{p-1}. \]
If $l_i>0$, then
\[ [v_i]_{l_i} = \bigg(\prod_{k=0}^{l_i-1} (v_i+l_i-k)\bigg)^{-1}, \]
so by Corollary 2.4
\[ {\rm ord}_p\:[v_i]_{l_i} = -\beta_p((v_i+l_i)^{(b)},l_i) = \frac{-l_i + {\rm wt}_p((v_i+l_i)^{(b)}) -{\rm wt}_p(v_i^{(b)})}{p-1}. \]
Note that (2.6) implies that $(v+l)^{(b)} = v^{(b)}+l$, so
\[ {\rm ord}_p\:[v]_l = \frac{{\rm wt}_p((v+l)^{(b)}) - {\rm wt}_p(v^{(b)})-\sum_{i=1}^N l_i}{p-1}. \]
This implies (2.7).
To prove (2.6) we consider three cases. Suppose first that $v_{ij}=0$ for all sufficiently large $j$. Then $v_i$ is a nonnegative integer and for all large $b$ we have $v_i = v_i^{(b)}$. If $l_i<0$, then $v_i +l_i\geq 0$ since ${\rm nsupp}(v+l) = {\rm nsupp}(v)$. If $l_i>0$, we can guarantee that $v_i+l_i\leq p^b-1$ simply by choosing $b$ to be sufficiently large.
Suppose next that $v_{ij} = p-1$ for all sufficiently large $j$. In this case we have for all sufficiently large $b$
\[ v_i = \sum_{j=0}^{b-1} v_{ij}p^j + p^b\sum_{j=0}^\infty (p-1)p^j = \sum_{j=0}^{b-1} v_{ij}p^j -p^b, \]
i.e., $v_i$ is a negative integer. If $l_i<0$, we will have $v_i^{(b)}+l_i\geq 0$ for $b$ sufficiently large: since $v_{ij}=p-1$ for all sufficiently large $j$, we can make $v_i^{(b)}$ arbitrarily large by choosing $b$ large. If $l_i>0$, then $v_i+l_i<0$ since ${\rm nsupp}(v+l) = {\rm nsupp}(v)$. But $v_i+l_i = v_i^{(b)}-p^b+l_i$, so $v_i^{(b)}+l_i<p^b$.
Finally, suppose that there are infinitely many $j$ for which $0<v_{ij}<p-1$. Suppose that $l_i<0$. Since $v_{ij}>0$ for infinitely many $j$, we can make $v_i^{(b)}$ arbitrarily large by choosing $b$ large and thus guarantee that $v_i^{(b)}+l_i\geq 0$. Suppose that $l_i>0$. Since $v_{ij}<p-1$ for infinitely many $j$, there are infinitely many $b$ such that
\[ v_i^{(b)}\leq \sum_{j=0}^{b-2}(p-1)p^j + (p-2)p^{b-1} = p^b-1-p^{b-1}. \]
By choosing such a $b$ large enough we can guarantee that $v_i^{(b)}+l_i\leq p^b-1$. But when this inequality holds for one $b$, it automatically holds for all larger $b$ as well.
\end{proof}
When $v\in R_p$, one has more control over the right-hand side of (2.7). If $b\in{\mathbb N}$ is chosen so that $(1-p^b)v\in{\mathbb N}^N$, then $(1-p^b)v=v^{(b)}$, so
\[ {\rm wt}_p(v^{(b)}) = b w_p(v). \]
If $b$ is sufficiently large and $l\in{\mathbb Z}^N$ satisfies ${\rm nsupp}(v+l) = {\rm nsupp}(v)$, then (2.6) implies that $v+(1-p^b)^{-1}l\in R_p$ as well, so $(1-p^b)v + l = v^{(b)}+l = (v+l)^{(b)}$ and
\[ {\rm wt}_p((v+l)^{(b)}) = bw_p(v+(1-p^b)^{-1}l). \]
For $v\in R_p$, Proposition 2.5 becomes the following.
\begin{corollary}
Let $v\in R_p$ and let $l\in{\mathbb Z}^N$ with ${\rm nsupp}(v+l) = {\rm nsupp}(v)$. Then for all sufficiently large $b\in{\mathbb N}_+$ satisfying $(1-p^b)v\in{\mathbb N}^N$ we have $v+(1-p^b)^{-1}l\in R_p$ and
\begin{equation}
{\rm ord}_p\: [v]_l\pi^{\sum_{i=1}^N l_i} = \frac{b}{p-1} \big(w_p(v+(1-p^b)^{-1}l)-w_p(v)\big).
\end{equation}
\end{corollary}
To complete the proof of Theorem 1.5 we need a lemma.
\begin{lemma}
Let $v\in R_p(\beta)$. Then $r\in R_p(\beta)$ if and only if $r=v+(1-p^b)^{-1}l$ for some $b\in{\mathbb N}_+$ satisfying $(1-p^b)v\in{\mathbb N}^N$ and some $l\in L_v$.
\end{lemma}
\begin{proof}
If $l\in L_v$, then Corollary 2.8 implies that $v+(1-p^b)^{-1}l\in R_p(\beta)$ for sufficiently large $b$ satisfying $(1-p^b)v\in{\mathbb N}^N$. Conversely, let $r\in R_p(\beta)$ and let $b\in{\mathbb N}_+$ be chosen so that $(1-p^b)v,(1-p^b)r\in{\mathbb N}^N$. Put $l=(1-p^b)(r-v)$, so that $r=v+(1-p^b)^{-1}l$. We claim that $l\in L_v$. Since $-1\leq v_i\leq 0$ for all $i$, to show that ${\rm nsupp}(v+l) = {\rm nsupp}(v)$ we need consider only the cases $v_i=-1$ and $v_i=0$. Suppose that $v_i=-1$ for some $i$. Then $0\leq r_i-v_i\leq 1$ so $1-p^b\leq l_i\leq 0$ and $v_i+l_i$ is a negative integer. If $v_i=0$ for some $i$, then $-1\leq r_i-v_i\leq 0$, so $0\leq l_i\leq p^b-1$ and $v_i+l_i\in{\mathbb N}$.
\end{proof}
\begin{proof}[Proof of Theorem $1.5$]
Fix $v\in R_p(\beta)$. By Lemma 2.10
\begin{multline*}
R_p(\beta) = \{v+(1-p^b)^{-1}l\mid \text{$b\in{\mathbb N}_+$, $(1-p^b)v\in{\mathbb N}^N$, $l\in L_v$,} \\ \text{and $v+(1-p^b)^{-1}l\in R_p$}\}.
\end{multline*}
The first sentence of Theorem~1.5 then follows from Corollary~2.8. If $w_p(v)>w_p(R_p(\beta))$, then there exists $r\in R_p(\beta)$ with $w_p(r)<w_p(v)$. Choose $b\in{\mathbb N}_+$ such that $(1-p^b)v,(1-p^b)r\in{\mathbb N}^N$. For each positive integer $c$, define $l^{(c)} = (1-p^{bc})(r-v)$. The proof of Lemma~2.10 shows that $l^{(c)}\in L_v$ and Corollary~2.8 gives
\begin{align*}
{\rm ord}_p\: [v]_{l^{(c)}}\pi^{\sum_{i=1}^N l^{(c)}_i} &= \frac{bc}{p-1} \big(w_p(v+(1-p^{bc})^{-1}l^{(c)})-w_p(v)\big) \\
&=\frac{bc}{p-1}\big(w_p(r)-w_p(v)\big).
\end{align*}
Since $w_p(r)-w_p(v)<0$ and $c$ is an arbitrary positive integer, this implies the second sentence of Theorem~1.5.
\end{proof}
\section{Elementary observations}
To avoid interrupting the discussion below, we give here some definitions and elementary results that will play a role in what follows.
Let $r$ be a rational number in the interval $[-1,0]$ and let $D$ be a positive integer such that $Dr\in{\mathbb Z}$. Let $h$ be a positive integer with $(h,D) = 1$. One checks that there is a unique rational number $r'\in[-1,0]$ with $Dr'\in{\mathbb Z}$ such that $r-hr'\in\{0,1,\dots,h-1\}$. We define $\phi_{h}(r) = r'$. We denote the $k$-fold iteration of this map by $\phi_h^{(k)}$.
Similarly, let $r$ be a rational number in the interval $[0,1]$ and let $D$ be a positive integer such that $Dr\in{\mathbb Z}$. Let $h$ be a positive integer with $(h,D) = 1$. There is a unique rational number $r'\in [0,1]$ with $Dr'\in{\mathbb Z}$ such that $hr'-r\in \{0,1,\dots,h-1\}$. We define $\psi_h(r) = r'$.
These two maps are related as follows.
\begin{lemma}
Let $r$ be a rational number in the interval $[0,1]$, let $D$ be a positive integer such that $Dr\in{\mathbb Z}$, and let $h$ be a positive integer with $(h,D) = 1$. Then
\[ \psi_h(r) = -(\phi_h(-r))\quad\text{and}\quad \psi_h(r)-1 = \phi_h(r-1). \]
\end{lemma}
We record for later use the following proposition, whose proof is straightforward.
\begin{proposition}
Let $r\in[-1,0]$ (resp.\ $r\in[0,1]$) be a rational number, let $D$ be a positive integer for which $Dr\in{\mathbb Z}$, and let $h_1$ and $h_2$ be positive integers such that $(h_1,D) = (h_2,D) = 1$. If $h_1\equiv h_2\pmod{D}$, then $\phi_{h_1}(r) =
\phi_{h_2}(r)$ (resp.\ $\psi_{h_1}(r) = \psi_{h_2}(r)$).
\end{proposition}
\section{Lower bound for $w_p\big(R_p(\beta)\big)$}
Let $\Delta$ be the convex hull of the set $A\cup\{{\bf 0}\}$ and let $C(\Delta)$ be the real
cone generated by $\Delta$. For $\gamma\in C(\Delta)$, let $w_\Delta(\gamma)$ be the smallest
nonnegative real number $w$ such that $w\Delta$ (the dilation of $\Delta$ by the factor $w$)
contains $\gamma$. It is easily seen that
\begin{equation}
w_\Delta(\gamma) = \min\bigg\{\sum_{i=1}^N t_i\:\bigg|\: (t_1,\dots,t_N)\in
({\mathbb R}_{\geq 0})^N \text{ and } \sum_{i=1}^N t_i{\bf a}_i =\gamma \bigg\}.
\end{equation}
If $\gamma\in{\mathbb Q}^n\cap C(\Delta)$, then we may replace $({\mathbb R}_{\geq 0})^N$ by
$({\mathbb Q}_{\geq 0})^N$ in (4.1). For any subset $X\subseteq
C(\Delta)$, define $w_\Delta(X) = \inf\{w(\gamma)\mid\gamma\in X\}$.
We give another formula for the weight function $w_p$ on $R_p$. For $r$ a $p$-integral rational number in $[-1,0]$, we have defined $\phi_p(r)$ in Section~3. This definition is extended to elements of $R_p$ componentwise: if $r=(r_1,\dots,r_N)\in R_p$, put $\phi_p(r) = (\phi_p(r_1),\dots,\phi_p(r_N))$.
It is clear that, as element of ${\mathbb Z}_p$, a $p$-integral rational number $r$ in $[-1,0]$ has $p$-adic expansion
\begin{equation}
r = \sum_{\mu=0}^\infty (\phi_p^{(\mu)}(r) - p\phi_p^{(\mu+1)}(r)) p^\mu.
\end{equation}
If we choose a positive integer $a$ such that $(1-p^a)r\in{\mathbb N}$, i.~e., such that $\phi_p^{(a)}(r) = r$, then
\[ (1-p^a)r = \sum_{\mu=0}^{a-1} (\phi_p^{(\mu)}(r) - p\phi_p^{(\mu+1)}(r))p^\mu. \]
Since $r=\phi_p^{(0)}(r) = \phi_p^{(a)}(r)$, this gives
\[ {\rm wt}_p((1-p^a)r) = (1-p)\sum_{\mu=0}^{a-1} \phi_p^{(\mu)}(r). \]
The definition of the weight function $w_p$ on $R_p$ then gives the desired formula: if $r=(r_1,\dots,r_N)\in R_p$ and $(1-p^a)r\in{\mathbb N}^N$, then
\begin{equation}
w_p(r) = \frac{(1-p)}{a} \sum_{\mu=0}^{a-1}\sum_{i=1}^N \phi_p^{(\mu)}(r_i).
\end{equation}
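Formula (4.3) can be checked against the original definition of $w_p$ in Section 1 (an illustrative sketch; `phi` below is a hypothetical helper implementing the map $\phi_p$ of Section 3):

```python
from fractions import Fraction

def phi(r, p):
    # phi_p(r) for a p-integral rational r in [-1, 0]: the unique r' in [-1, 0]
    # with denominator dividing that of r such that r - p*r' is in {0, ..., p-1}.
    D = r.denominator
    for m in range(p):
        r_prime = (r - m) / p
        if -1 <= r_prime <= 0 and D % r_prime.denominator == 0:
            return r_prime
    raise ValueError("r is not p-integral")

def wt(t, p):
    # Digit sum of a nonnegative integer t in base p.
    s = 0
    while t > 0:
        s += t % p
        t //= p
    return s

def w_def(r, p, a):
    # w_p(r) from the definition: wt_p((1 - p^a) r) / a, assuming (1 - p^a) r in N^N.
    return Fraction(sum(wt(int((1 - p**a) * ri), p) for ri in r), a)

def w_phi(r, p, a):
    # w_p(r) from (4.3): ((1 - p)/a) * sum_{mu=0}^{a-1} sum_i phi_p^{(mu)}(r_i).
    total = Fraction(0)
    for ri in r:
        x = ri
        for _ in range(a):
            total += x
            x = phi(x, p)
    return Fraction(1 - p, a) * total

# r = (-1/3, -2/3) in R_2; phi_2 swaps -1/3 and -2/3, so phi_2^{(2)}(r) = r.
r = (Fraction(-1, 3), Fraction(-2, 3))
```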
Let $\sigma_{-\beta}$ be the smallest closed face of $C(\Delta)$ that contains $-\beta$, let $T$ be the
set of proper closed faces of $\sigma_{-\beta}$, and put
\[ \sigma^\circ_{-\beta} = \sigma_{-\beta}\setminus\bigcup_{\tau\in T} \tau. \]
By the minimality of $\sigma_{-\beta}$, $-\beta\not\in\tau$ for any $\tau\in T$, so $-\beta\in
\sigma_{-\beta}^\circ$.
\begin{lemma}
For all $r\in R_p(\beta)$ and all $\mu$, $-\sum_{i=1}^N \phi_p^{(\mu)}(r_i){\bf a}_i\in\sigma^\circ_{-\beta}$.
\end{lemma}
\begin{proof}
Since $-\beta\in\sigma_{-\beta}$ and $-\sum_{i=1}^N r_i{\bf a}_i = -\beta$ we must have $r_i=0$ for
all $i$ such that ${\bf a}_i\not\in \sigma_{-\beta}$. But $r_i=0$ implies $\phi_p^{(\mu)}(r_i)=0$, so
$-\sum_{i=1}^N \phi_p^{(\mu)}(r_i){\bf a}_i\in\sigma_{-\beta}$. Fix $\tau\in T$. Since $-\beta\not\in\tau$, we have $r_i\neq 0$ for some $i$ such that
${\bf a}_i\not\in\tau$. But $r_i\neq 0$ implies that $\phi_p^{(\mu)}(r_i)\neq 0$, so $-\sum_{i=1}^N
\phi_p^{(\mu)}(r_i){\bf a}_i\not\in\tau$.
\end{proof}
If $r,\tilde{r}\in R_p(\beta)$, then $\sum_{i=1}^N (r_i-\tilde{r}_i){\bf a}_i = {\bf 0}$, which implies that for all~$\mu$
\[ -\sum_{i=1}^N \phi_p^{(\mu)}(r_i){\bf a}_i\equiv -\sum_{i=1}^N \phi_p^{(\mu)}(\tilde{r}_i){\bf a}_i\pmod{{\mathbb Z}A}, \]
where ${\mathbb Z}A$ denotes the abelian group generated by $A$. We may thus choose $\beta^{(\mu)}\in{\mathbb Q}^n$ such that
$-\sum_{i=1}^N \phi_p^{(\mu)}(r_i){\bf a}_i\in -\beta^{(\mu)}+{\mathbb Z}A$ for all $r\in R_p(\beta)$.
It now follows from (4.1) and Lemma 4.4 that for all $r\in R_p(\beta)$,
\[ -\sum_{i=1}^N \phi_p^{(\mu)}(r_i)\geq w_\Delta\bigg(-\sum_{i=1}^N \phi_p^{(\mu)}(r_i){\bf a}_i\bigg)\geq w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+{\mathbb Z}A)\big). \]
Equation (4.3) therefore gives: if $r\in R_p(\beta)$ and $(1-p^a)r\in{\mathbb N}^N$, then
\begin{equation}
w_p(r)\geq \frac{p-1}{a}\sum_{\mu=0}^{a-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+{\mathbb Z}A)\big).
\end{equation}
Note that the right-hand side of (4.5) is independent of the choice of $a$: if $e$ is the smallest positive
integer such that $-\beta^{(e)}+{\mathbb Z}A = -\beta+{\mathbb Z}A$, then $e$ divides $a$ and
one has
\[ \sum_{\mu=0}^{a-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+{\mathbb Z}A)\big) = \frac{a}{e}
\sum_{\mu=0}^{e-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+{\mathbb Z}A)\big). \]
Equation (4.5) now shows that
\[ w_p(r)\geq \frac{p-1}{e}\sum_{\mu=0}^{e-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+
{\mathbb Z}A)\big) \]
for all $r\in R_p(\beta)$, which establishes the following result.
\begin{theorem}
With the above notation, we have
\[ w_p(R_p(\beta))\geq \frac{p-1}{e}\sum_{\mu=0}^{e-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+
{\mathbb Z}A)\big). \]
\end{theorem}
The following corollary is now an immediate consequence of Theorem~1.5.
\begin{corollary}
If $v\in R_p(\beta)$ satisfies
\begin{equation}
w_p(v) = \frac{p-1}{e}\sum_{\mu=0}^{e-1} w_\Delta\big(\sigma_{-\beta}^\circ\cap
(-\beta^{(\mu)}+{\mathbb Z}A)\big),
\end{equation}
then the series $\Phi_{v,\pi}(\lambda)$ has $p$-integral coefficients.
\end{corollary}
We make Corollary 4.7 more precise. Let $v\in R_p(\beta)$ with $(1-p^a)v\in{\mathbb N}^N$. Set
\[ \beta_\mu = \sum_{i=1}^N \phi_p^{(\mu)}(v_i){\bf a}_i. \]
Then $-\beta_\mu\in -\beta^{(\mu)}+{\mathbb Z}A$, so
\[ -\sum_{i=1}^N \phi_p^{(\mu)}(v_i)\geq w_\Delta(-\beta_\mu)\geq w_\Delta\big(\sigma_{-\beta}^\circ\cap
(-\beta^{(\mu)}+{\mathbb Z}A)\big), \]
where the first inequality follows from (4.1). It follows from (4.3) that (4.8) can hold only when
\[ -\sum_{i=1}^N \phi_p^{(\mu)}(v_i) = w_\Delta\big(\sigma_{-\beta}^\circ\cap
(-\beta^{(\mu)}+{\mathbb Z}A)\big)\quad\text{for $\mu=0,1,\dots,a-1$.} \]
\begin{corollary}
Let $v\in R_p(\beta)$ satisfy $(1-p^a)v\in{\mathbb N}^N$. If
\begin{equation}
-\sum_{i=1}^N \phi^{(\mu)}_p(v_i) = w_\Delta\big(\sigma_{-\beta}^\circ\cap(-\beta^{(\mu)}+{\mathbb Z}A)\big)
\end{equation}
for $\mu=0,1,\dots,a-1$, then the series $\Phi_{v,\pi}(\lambda)$ has $p$-integral coefficients.
\end{corollary}
Let $D\in{\mathbb N}$ satisfy $Dv\in{\mathbb Z}^N$. Note that, except for the factor $(p-1)$, the right-hand side of (4.8) is independent of $p$. And by Proposition~3.2 the right-hand side of (4.3), except for the factor $(1-p)$, depends only on $p\pmod{D}$. We thus have the following result.
\begin{proposition}
When (4.8) holds for one prime $p$, it also holds for all primes congruent to $p$ modulo~$D$. In this case, the series $\Phi_{v,\pi'}(\lambda)$ has $p'$-integral coefficients for all primes $p'$ congruent to $p$ modulo $D$ (where $\text{ord}_{p'}\:\pi' = 1/(p'-1)$).
\end{proposition}
\section{Classical hypergeometric series}
To illustrate the information contained in Corollary 4.9, we derive an integrality condition for certain classical hypergeometric series. For $z\in{\mathbb C}$, $k\in{\mathbb N}$, we use the Pochhammer notation $(z)_k$ for the increasing factorial $(z)_k = z(z+1)\cdots (z+k-1)$.
Let $c_{js}, d_{ks}\in{\mathbb N}$, $1\leq j\leq J$, $1\leq k\leq K$, $1\leq s\leq r$, and let
\begin{align}
C_j(x_1,\dots,x_r) &= \sum_{s=1}^r c_{js}x_s, \\
D_k(x_1,\dots,x_r) &= \sum_{s=1}^r d_{ks}x_s.
\end{align}
To avoid trivial cases, we assume that no $C_j$ or $D_k$ is identically zero. We also assume that for each $s$, some $c_{js}\neq 0$ or some $d_{ks}\neq 0$, i.~e., each variable $x_s$ appears in some $C_j$ or $D_k$ with nonzero coefficient. We make the hypothesis that
\begin{equation}
\sum_{j=1}^J C_j(x_1,\dots,x_r) = \sum_{k=1}^K D_k(x_1,\dots,x_r),
\end{equation}
i.~e.,
\begin{equation}
\sum_{j=1}^J c_{js} = \sum_{k=1}^K d_{ks} \quad\text{for $s=1,\dots,r$.}
\end{equation}
Let $\Theta = (\theta_1,\dots,\theta_J)$ and $\Sigma = (\sigma_1,\dots,\sigma_K)$ be sequences of $p$-integral rational numbers in the interval $(0,1]$. Consider the series
\begin{equation}
F(t_1,\dots,t_r) = \sum_{m_1,\dots,m_r = 0}^\infty \frac{(\theta_1)_{C_1(m)}\cdots(\theta_J)_{C_J(m)}}{(\sigma_1)_{D_1(m)}\cdots(\sigma_K)_{D_K(m)}} t_1^{m_1}\cdots t_r^{m_r}.
\end{equation}
These series are all $A$-hypergeometric (see \cite{AS2}) and include, for example, the four $r$-variable Lauricella series $F_A,F_B,F_C,F_D$.
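As a small sanity check in the simplest one-variable case (our own toy instance, not one treated in the text): take $J=K=1$, $C_1(m)=D_1(m)=m$, $\theta_1=1/2$, $\sigma_1=1$. Then the coefficients $(1/2)_m/(1)_m=\binom{2m}{m}/4^m$ have denominators that are powers of $2$, hence are $p$-integral exactly for the odd primes $p$.

```python
from fractions import Fraction
from math import comb

def poch(z, k):
    """Pochhammer symbol (rising factorial): (z)_k = z(z+1)...(z+k-1)."""
    out = Fraction(1)
    for i in range(k):
        out *= z + i
    return out

# Coefficients of F(t) = sum_m (1/2)_m / (1)_m * t^m for m = 1..7:
coeffs = [poch(Fraction(1, 2), m) / poch(Fraction(1), m) for m in range(1, 8)]
assert all(c == Fraction(comb(2*m, m), 4**m) for m, c in zip(range(1, 8), coeffs))
# Denominators are powers of 2, so the coefficients are p-integral
# for every odd prime p but not for p = 2:
assert all(c.denominator & (c.denominator - 1) == 0 for c in coeffs)
assert all(c.denominator % p != 0 for c in coeffs for p in (3, 5, 7))
```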
For a nonnegative integer $\mu$ and a positive integer $h$ prime to the denominator of every $\theta_j$ and $\sigma_k$ we put
\[ \psi_h^{(\mu)}(\Theta) = (\psi_h^{(\mu)}(\theta_1),\dots,\psi_h^{(\mu)}(\theta_J)) \text{ and }
\psi_h^{(\mu)}(\Sigma) = (\psi_h^{(\mu)}(\sigma_1),\dots,\psi_h^{(\mu)}(\sigma_K)). \]
We will apply Corollary 4.9 to obtain a condition for $F(t)$ to have $p$-integral coefficients. Define a step function on ${\mathbb R}^r$ (analogous to that of Landau\cite{L})
\begin{multline*}
\xi(\Theta,\Sigma;x_1,\dots,x_r) = \\
\sum_{j=1}^J \lfloor 1-\theta_j + C_j(x_1,\dots,x_r)\rfloor - \sum_{k=1}^K \lfloor 1-\sigma_k + D_k(x_1,\dots,x_r)\rfloor.
\end{multline*}
\begin{theorem}
Let $D$ be a positive integer such that $D\theta_j,D\sigma_k\in{\mathbb N}$ for all $j,k$. Let $h$ be a positive integer prime to $D$ and choose $a$ such that $h^a\equiv 1\pmod{D}$.
The series (5.5) has $p$-integral coefficients for all primes $p\equiv h\pmod{D}$ if
\[ \xi\big(\psi_h^{(\mu)}(\Theta),\psi_h^{(\mu)}(\Sigma);x_1,\dots,x_r\big) \geq 0 \]
for $\mu=0,1,\dots,a-1$ and all $x_1,\dots,x_r\in [0,1)$.
\end{theorem}
\begin{corollary}
If the hypothesis of Theorem 5.6 is satisfied for positive integers $h_1,\dots,h_{\varphi(D)}$ representing all the residue classes modulo $D$ that are prime to $D$, then $F(t)$ has $p$-integral coefficients for all primes $p$ not dividing $D$.
\end{corollary}
{\bf Remark.} It follows from the proof of Delaygue-Rivoal-Roques\cite[Proposition 1]{DRR} that when the hypothesis of Corollary 5.7 is satisfied there exist positive integers $b_1,\dots,b_m$ such that $F(b_1t_1,\dots,b_mt_m)$ has integral coefficients.
We do not know whether the converse of Corollary 5.7 is true, but it does hold in some special cases. When $\theta_j=1$ for all $j$ and $\sigma_k=1$ for all $k$, the coefficients of $F(t)$ are ratios of products of factorials and one may take $a=1$. In this case Theorem 5.6 reduces to one direction of a result of Landau\cite{L}, who also proved the converse. The converse of Corollary 5.7 holds as well in the one-variable case when $J=K$ and
\begin{equation}
F(t) = \sum_{m=0}^\infty \frac{(\theta_1)_m\cdots(\theta_J)_m}{(\sigma_1)_m\cdots(\sigma_J)_m}t^m
\end{equation}
by a theorem of Christol\cite{C} (see also the discussion in \cite{DRR}).
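To make Landau's criterion concrete (our own worked instance of the $\theta_j=\sigma_k=1$ case just mentioned): for the factorial ratio $(2m)!/(m!)^2$ one has $C_1(x)=2x$ and $D_1(x)=D_2(x)=x$, so $\xi(x)=\lfloor 2x\rfloor-2\lfloor x\rfloor$, which is nonnegative on $[0,1)$, matching the integrality of the central binomial coefficients; the reversed ratio fails both tests.

```python
from fractions import Fraction
from math import floor, factorial

# xi for (2m)!/(m!)^2 with theta = sigma = 1:
def xi(x):
    return floor(2 * x) - 2 * floor(x)

grid = [Fraction(k, 64) for k in range(64)]          # sample points in [0,1)
assert all(xi(x) >= 0 for x in grid)                 # Landau's criterion holds
assert all(factorial(2*m) % factorial(m)**2 == 0 for m in range(1, 30))

# Reversed ratio (m!)^2/(2m)!: xi changes sign and is negative at x = 1/2 ...
def xi_rev(x):
    return 2 * floor(x) - floor(2 * x)

assert xi_rev(Fraction(1, 2)) == -1
# ... and indeed the ratio is not an integer already at m = 2:
assert factorial(2)**2 % factorial(4) != 0           # 4 does not divide into 24
```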
Beukers and Heckman\cite[Theorem 4.8]{BH} have characterized those series (5.8) that are algebraic functions. It follows from their characterization that when (5.8) is an algebraic function the hypothesis of Corollary 5.7 is satisfied, so the coefficients of (5.8) are $p$-integral for all primes $p$ not dividing $D$. Using Theorem 1.5, we show later (Proposition~7.5) that the same conclusion holds more generally for the series (1.3) when it is an algebraic function.
The $A$-hypergeometric system associated to the series (5.5) is the one introduced in \cite{AS2}. Put $n= r+J+K$. Let ${\bf a}_1,\dots,{\bf a}_n$ be the standard unit basis vectors in ${\mathbb R}^n$ and for $s=1,\dots,r$ let
\[ {\bf a}_{n+s} = (0,\dots,0,1,0,\dots,0,c_{1s},\dots,c_{Js},-d_{1s},\dots,-d_{Ks}), \]
where the first $r$ coordinates have a $1$ in the $s$-th position and zeros elsewhere. Our hypothesis that some $c_{js}$ or some $d_{ks}$ is nonzero implies that ${\bf a}_1,\dots,{\bf a}_{n+r}$ are all distinct. Put $N=n+r$ and let $A = \{{\bf a}_i\}_{i=1}^{N}\subseteq {\mathbb Z}^n$. Let $\Delta$ be the convex hull of $A\cup\{{\bf 0}\}$.
Under (5.4) the elements of the set $A$ all lie on the hyperplane $\sum_{i=1}^n u_i =1$ in ${\mathbb R}^n$, so the weight function $w_\Delta$ of (4.1) is given by
\begin{equation}
w_\Delta(\gamma_1,\dots,\gamma_n) = \sum_{i=1}^n \gamma_i.
\end{equation}
We choose
\[ v=(-1,\dots,-1,-\theta_1,\dots,-\theta_J,\sigma_1-1,\dots,\sigma_K-1,0,\dots,0)\in {\mathbb Q}^N, \]
where $-1$ and $0$ are repeated $r$ times, so that
\[ \beta = \sum_{i=1}^N v_i{\bf a}_i = (-1,\dots,-1,-\theta_1,\dots,-\theta_J,\sigma_1-1,\dots,\sigma_K-1)\in{\mathbb R}^n. \]
As noted in \cite{AS2} one has
\begin{multline}
L = \{ l=(-m_1,\dots,-m_r,-C_1(m),\dots,-C_J(m), \\
D_1(m),\dots,D_K(m),m_1,\dots,m_r)\mid m=(m_1,\dots,m_r)\in{\mathbb Z}^r\}
\end{multline}
and
\begin{multline}
L_v = \{ l=(-m_1,\dots,-m_r,-C_1(m),\dots,-C_J(m), \\
D_1(m),\dots,D_K(m),m_1,\dots,m_r)\mid m=(m_1,\dots,m_r)\in{\mathbb N}^r\}.
\end{multline}
After a calculation one sees that the series (1.3) is
\begin{multline}
\Phi_v(\lambda) = \\
(\lambda_1\cdots\lambda_{r+J})^{-1}
\sum_{m_1,\dots,m_r=0}^\infty \frac{\displaystyle\prod_{j=1}^J (\theta_j)_{C_j(m)}}{\displaystyle\prod_{k=1}^K (\sigma_k)_{D_k(m)}} \frac{\displaystyle\prod_{k=1}^K \lambda_{r+J+k}^{D_k(m)}\prod_{s=1}^r \lambda_{n+s}^{m_s}}{\displaystyle\prod_{s=1}^r (-\lambda_s)^{m_s}\prod_{j=1}^J (-\lambda_{r+j})^{C_j(m)}}.
\end{multline}
Furthermore, the hypothesis (5.4) implies that we are in the nonconfluent case, so there is no normalizing factor of $\pi$ to include. We can thus apply Corollary~4.9 to get an integrality condition for the series (5.12) and hence an integrality condition for the series (5.5).
Using Lemma 3.1 and the definition of $v$, we see that the left-hand side of (4.10) is given by
\begin{equation}
r+\sum_{j=1}^J \psi^{(\mu)}_p(\theta_j) + \sum_{k=1}^K \psi^{(\mu)}_p(1-\sigma_k).
\end{equation}
Theorem 5.6 then follows from Corollary 4.9, Proposition 3.2, and the following proposition. Note that ${\mathbb Z}A = {\mathbb Z}^n$ in this case.
\begin{proposition}
For each $\mu$ one has
\begin{equation}
w_\Delta\big(\sigma^\circ_{-\beta}\cap(-\beta^{(\mu)}+{\mathbb Z}^n)\big)= r+\sum_{j=1}^J \psi^{(\mu)}_p(\theta_j) + \sum_{k=1}^K \psi^{(\mu)}_p(1-\sigma_k)
\end{equation}
if and only if
\begin{equation}
\xi\big(\psi_p^{(\mu)}(\Theta),\psi_p^{(\mu)}(\Sigma);x_1,\dots,x_r\big) \geq 0\quad
\text{for all $x_1,\dots,x_r\in[0,1)$.}
\end{equation}
\end{proposition}
\begin{proof}
It suffices to prove this when $\mu=0$, the other cases being analogous in view of Lemma~3.1. By (5.9) $w_\Delta(-\beta)$ equals the right-hand side of (5.15) when $\mu=0$, so we are reduced to proving that
\begin{equation}
w_\Delta\big(\sigma^\circ_{-\beta}\cap(-\beta+{\mathbb Z}^n)\big) = w_\Delta(-\beta)
\end{equation}
if and only if
\begin{equation}
\sum_{j=1}^J \lfloor 1-\theta_j + C_j(x_1,\dots,x_r)\rfloor - \sum_{k=1}^K \lfloor 1-\sigma_k + D_k(x_1,\dots,x_r)\rfloor\geq 0
\end{equation}
for all $x_1,\dots,x_r\in[0,1)$.
The set $A$ and the associated polytope $\Delta$ and cone $C(\Delta)$ were discussed in~\cite{AS2}. Since $\theta_j>0$ for all $j$, it follows from \cite[Lemma 2.5]{AS2} that $-\beta$ is an interior point of~$C(\Delta)$, hence $\sigma^\circ_{-\beta} = C(\Delta)^\circ$, the interior of~$C(\Delta)$. Equation (5.17) thus becomes
\[ w_\Delta\big(C(\Delta)^\circ\cap(-\beta+{\mathbb Z}^n)\big) = w_\Delta(-\beta). \]
Fix $u\in{\mathbb Z}^n$ such that $-\beta+u$ is an interior point of $C(\Delta)$ with
\begin{equation}
w_\Delta(-\beta+u) = w_\Delta\big(C(\Delta)^\circ\cap(-\beta+{\mathbb Z}^n)\big).
\end{equation}
Then (5.17) is equivalent to
\[ w_\Delta(-\beta+u) = w_\Delta(-\beta). \]
The inequality $w_\Delta(-\beta + u)\leq w_\Delta(-\beta)$ is trivial from (5.19), so we are reduced to showing that the inequality
\[ w_\Delta(-\beta+u)\geq w_\Delta(-\beta) \]
holds if and only if (5.18) holds for all $x_1,\dots,x_r\in[0,1)$. By (5.9) we have
\[ w_\Delta(-\beta+u) = w_\Delta(-\beta) + \sum_{i=1}^n u_i, \]
so this is equivalent to showing that, for $u$ satisfying (5.19),
\begin{equation}
\sum_{i=1}^n u_i\geq 0
\end{equation}
if and only if (5.18) holds for all $x_1,\dots,x_r\in[0,1)$.
By \cite[Lemma 2.4]{AS2} we may write
\begin{equation}
-\beta+u = \sum_{i=1}^N z_i{\bf a}_i
\end{equation}
with $z_i\geq 0$ for all $i$ and $z_i>0$ for $i=1,\dots,r+J$. Note that since the coordinates of each ${\bf a}_i$ sum to 1, Equation (5.9) implies that
\begin{equation}
w_\Delta(-\beta+u) = \sum_{i=1}^N z_i.
\end{equation}
We must have $z_i\leq 1$ for all $i$. For if some $z_{i_0}>1$, then
\begin{equation}
-\beta + u-{\bf a}_{i_0} = (z_{i_0}-1){\bf a}_ {i_0} + \sum_{\substack{i=1\\ i\neq i_0}}^N z_i{\bf a}_i
\end{equation}
is an element of $-\beta + {\mathbb Z}^n$ interior to $C(\Delta)$, since every ${\bf a}_i$ occurring with positive coefficient in (5.21) also occurs with positive coefficient in (5.23). But $w_\Delta(-\beta+u-{\bf a}_{i_0})<w_\Delta(-\beta+u)$ by (5.22), contradicting (5.19).
We claim that $z_i<1$ for $i=r+J+1,\dots,N$. If $z_{i_0}=1$ for some $i_0\in\{r+J+1,\dots,N\}$, then (5.23) becomes
\[ -\beta + u-{\bf a}_{i_0} = \sum_{\substack{i=1\\ i\neq i_0}}^N z_i{\bf a}_i. \]
But since $z_i>0$ for $i=1,\dots,r+J$, the point $-\beta + u-{\bf a}_{i_0}$ is an element of $-\beta + {\mathbb Z}^n$ interior to $C(\Delta)$ by \cite[Lemma 2.5]{AS2}, and again $w_\Delta(-\beta+u-{\bf a}_{i_0})<w_\Delta(-\beta+u)$, contradicting (5.19).
We have proved that in the representation (5.21) one has
\begin{equation}
z_i\in(0,1]\quad\text{for $i=1,\dots,r+J$}
\end{equation}
and
\begin{equation}
z_i\in [0,1)\quad\text{for $i=r+J+1,\dots,N$.}
\end{equation}
We now examine (5.21) coordinatewise. For $s=1,\dots,r$ we have
\begin{equation}
1+u_s = z_s + z_{n+s}.
\end{equation}
By (5.24) and (5.25) we have $z_s\in(0,1]$ and $z_{n+s}\in[0,1)$. Since $u_s\in{\mathbb Z}$, Equation~(5.26) implies
\begin{equation}
u_s=0 \quad\text{for $s=1,\dots,r$}
\end{equation}
and
\begin{equation}
z_s=1-z_{n+s} \quad\text{for $s=1,\dots,r$.}
\end{equation}
For $j=1,\dots,J$ we have
\begin{equation}
\theta_j+u_{r+j} = z_{r+j}+C_j(z_{n+1},\dots,z_{n+r}).
\end{equation}
Since $u_{r+j}\in{\mathbb Z}$ and $z_{r+j}\in(0,1]$ we have
\begin{equation}
z_{r+j} =
1 + \big\lfloor -\theta_j+ C_j(z_{n+1},\dots,z_{n+r})\big\rfloor - \big( -\theta_j + C_j(z_{n+1},\dots,z_{n+r})\big)
\end{equation}
for $j=1,\dots,J$, which implies by (5.29)
\begin{equation}
u_{r+j}= 1 + \big\lfloor -\theta_j + C_j(z_{n+1},\dots,z_{n+r})\big\rfloor\quad\text{for $j=1,\dots, J$.}
\end{equation}
For $k=1,\dots,K$ we have
\begin{equation}
1-\sigma_k + u_{r+J+k} = z_{r+J+k} -D_k(z_{n+1},\dots,z_{n+r}).
\end{equation}
Since $u_{r+J+k}\in{\mathbb Z}$ and $z_{r+J+k}\in[0,1)$ we have
\begin{equation}
z_{r+J+k} =
1-\sigma_k + D_k(z_{n+1},\dots,z_{n+r}) - \big\lfloor 1-\sigma_k +D_k(z_{n+1},\dots,z_{n+r})\big\rfloor
\end{equation}
for $k=1,\dots,K$, which implies by (5.32)
\begin{equation}
u_{r+J+k} = - \big\lfloor 1-\sigma_k+D_k(z_{n+1},\dots,z_{n+r})\big\rfloor\quad\text{for $k=1,\dots,K$.}
\end{equation}
Adding (5.27), (5.31), and (5.34) gives
\begin{equation}
\sum_{i=1}^n u_i =
\sum_{j=1}^J \big\lfloor 1-\theta_j + C_j(z_{n+1},\dots,z_{n+r})\big\rfloor - \sum_{k=1}^K
\big\lfloor 1-\sigma_k+D_k(z_{n+1},\dots,z_{n+r})\big\rfloor.
\end{equation}
This shows that if (5.18) holds for all $x_1,\dots,x_r\in[0,1)$, then (5.20) holds.
The argument can be reversed to show that (5.20) implies (5.18). If there exist $x_1,\dots,x_r\in[0,1)$ for which (5.18) fails, define $z_{n+s} = x_s$ for $s=1,\dots,r$, define $z_s$ for $s=1,\dots,r$ by (5.28), define $z_{r+j}$ for $j=1,\dots,J$ by (5.30), and define $z_{r+J+k}$ for $k=1,\dots,K$ by (5.33). The right-hand side of Equation (5.21) then defines an element of $C(\Delta)$. It equals the left-hand side of (5.21) with $u$ given by Equations (5.27), (5.31), and (5.34), so it is also an element of $-\beta + {\mathbb Z}^n$. Furthermore, (5.35) holds, so if (5.18) fails, then (5.20) fails also.
\end{proof}
The argument above closely follows the proof of \cite[Theorem 2.1(a)]{AS2}. One can also derive an analogue of \cite[Theorem 2.1(b)]{AS2} by following that proof.
Define ${\mathcal D}(\Theta,\Sigma)\subseteq [0,1)^r$ to be the subset where
\[ 1-\theta_j + C_j(x_1,\dots,x_r)\geq 1\quad\text{for some $j$} \]
or
\[ 1-\sigma_k + D_k(x_1,\dots,x_r)\geq 1\quad\text{for some $k$.} \]
Clearly $\xi(\Theta,\Sigma;x_1,\dots,x_r)=0$ on the complement of ${\mathcal D}(\Theta,\Sigma)$.
\begin{proposition}
The point $-\beta$ is the unique interior point of $C(\Delta)$ satisfying
\[ w_\Delta(-\beta) = w_\Delta\big(C(\Delta)^\circ\cap(-\beta + {\mathbb Z}^n)\big)\]
if and only if $\xi(\Theta,\Sigma;x_1,\dots,x_r)\geq 1$ for all $x\in {\mathcal D}(\Theta,\Sigma)$.
\end{proposition}
\section{Integral coefficients}
Corollary 4.9 leads to a simple condition for the series $\Phi_v(\lambda)$ to have integral coefficients. We suppose that we are in the nonconfluent case, so the set $A$ lies on a hyperplane $h(u) = 1$ in ${\mathbb R}^n$ and there is no normalizing factor of $\pi$. We also suppose that the coordinates of $v$ equal either 0 or $-1$, say,
\[ v=(-1,\dots,-1,0,\dots,0) \]
where $-1$ is repeated $M$ times and 0 is repeated $N-M$ times. We then have
\begin{multline*}
L_v = \{l=(l_1,\dots,l_N)\in L\mid\text{$l_i\leq 0$ for $i=1,\dots,M$} \\
\text{and $l_j\geq 0$ for $j=M+1,\dots,N$}\}.
\end{multline*}
The series (1.3) becomes
\begin{equation}
\Phi_v(\lambda) = (\lambda_1\cdots \lambda_M)^{-1}\sum_{l\in L_v} (-1)^{\sum_{i=1}^M l_i}\frac{\prod_{i=1}^M (-l_i)!}{\prod_{j=M+1}^N l_j!}\lambda^l.
\end{equation}
In this case we have $\beta=-\sum_{i=1}^M {\bf a}_i \in{\mathbb Z}A$ and Equation (4.10) becomes
\begin{equation}
M=w_\Delta\big(\sigma^\circ_{-\beta}\cap{\mathbb Z}A\big).
\end{equation}
Since $h({\bf a}_i)=1$ for $i=1,\dots,N$, it follows from (4.1) that $w_\Delta(\gamma) = h(\gamma)$ for $\gamma\in C(\Delta)$. In particular, $w_\Delta(-\beta) = M$. Equation (6.2) thus asserts that on the set $\sigma^\circ_{-\beta}\cap{\mathbb Z}A$, the function $w_\Delta (=h)$ assumes its minimum value at the point $-\beta$. Since $v$ lies in $R_p(\beta)$ for all primes $p$, we can apply Corollary 4.9 to obtain the following conclusion.
\begin{theorem}
With the above notation, if $w_\Delta\big(\sigma^\circ_{-\beta}\cap{\mathbb Z}A\big)=M$, then the series (6.1) has integral coefficients.
\end{theorem}
{\bf Example.} Let
\begin{equation}
f_\lambda=\sum_{j=1}^N \lambda_jx^{{\bf b}_j} \in K[\lambda_1,\dots,\lambda_N][x_0,\dots,x_n]
\end{equation}
be a homogeneous polynomial of degree $d$, where $\lambda_1,\dots,\lambda_N$ are indeterminates and $K$ is a field. Put ${\bf a}_j = ({\bf b}_j,1)$ and let $A=\{{\bf a}_i\}_{i=1}^N\subseteq{\mathbb Z}^{n+2}$. We make the hypothesis that $d$ divides $n+1$, say, $n+1=de$. Note that this generalizes the case of Calabi-Yau hypersurfaces in ${\mathbb P}^n$ where $d=n+1$. The points of $A$ all lie on the hyperplane $\sum_{i=0}^n u_i = du_{n+1}$ in ${\mathbb R}^{n+2}$. We suppose also that there exist monomials $x^{{\bf b}_1},\dots,x^{{\bf b}_e}$ such that
\[ x^{{\bf b}_1}\cdots x^{{\bf b}_e} = x_0\cdots x_n. \]
Take $\beta = (-1,\dots,-1,-e)$. Then $-\beta$ is the unique point of $\sigma_{-\beta}^\circ\cap{\mathbb Z}A$ where $w_\Delta$ assumes its minimum value, namely, $w_\Delta(-\beta) = e$. Set
\[ v=(-1,\dots,-1,0,\dots,0), \]
where $-1$ is repeated $e$ times and $0$ is repeated $N-e$ times. Then
\[ \sum_{i=1}^N v_i{\bf a}_i = -{\bf a}_1-\cdots-{\bf a}_e = (-1,\dots,-1,-e) = \beta. \]
Since (6.2) holds in this case (with $M=e$), Theorem 6.3 implies that the resulting series (6.1) has integral coefficients.
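For a concrete numerical instance of this example (our own illustration, assuming the classical quintic setup with $n=4$, $d=5$, $e=1$), the coefficients of the resulting series involve the factorial ratios $(5m)!/(m!)^5$; these are the multinomial coefficients choosing $m,m,m,m,m$ from $5m$, hence integers, as Theorem 6.3 predicts.

```python
from math import factorial

# (5m)!/(m!)^5 is the multinomial coefficient C(5m; m,m,m,m,m), an integer --
# the integrality guaranteed by Theorem 6.3 in this case.
ratios = [factorial(5*m) // factorial(m)**5 for m in range(6)]
assert all(factorial(5*m) % factorial(m)**5 == 0 for m in range(1, 25))
assert ratios[1] == 120      # 5!/(1!)^5 = 120
```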
In \cite{AS1}, we used this fact to give a $p$-adic analytic formula for the (unique) reciprocal root of the zeta function of $f_\lambda = 0$ of minimal $p$-divisibility when $K$ is a finite field.
\section{Unboundedness of hypergeometric series}
Let $\Phi_{v,\pi}(\lambda)$ be as in (1.4). If this series does not have $p$-integral coefficients, then by Theorem 1.5 there exists $v'\in R_p(\beta)$ for which $w_p(v')<w_p(v)$. Equation~(4.3) then implies that if $(1-p^a)v,(1-p^a)v'\in {\mathbb N}^N$, then
\begin{equation}
\sum_{\mu=0}^{a-1} \sum_{i=1}^N \phi_p^{(\mu)}(v'_i)< \sum_{\mu=0}^{a-1} \sum_{i=1}^N \phi_p^{(\mu)}(v_i).
\end{equation}
\begin{proposition}
Let $v,v'\in R_p(\beta)$ with $w_p(v')<w_p(v)$ and let $D$ be a positive integer such that $Dv,Dv'\in{\mathbb N}^N$. Let $p'$ be any prime such that $p'\equiv p\pmod{D}$ and let $\pi'$ satisfy $\text{ord}_{p'}\:\pi' = 1/(p'-1)$. Then the series $\Phi_{v,\pi'}(\lambda)$ does not have $p'$-integral coefficients and is $p'$-adically unbounded.
\end{proposition}
\begin{proof}
Since $p\equiv p'\pmod{D}$ we have $(1-(p')^a)v,(1-(p')^a)v'\in {\mathbb N}^N$. By (7.1) and Proposition 3.2
\[ \sum_{\mu=0}^{a-1} \sum_{i=1}^N \phi_{p'}^{(\mu)}(v'_i)< \sum_{\mu=0}^{a-1} \sum_{i=1}^N \phi_{p'}^{(\mu)}(v_i). \]
Equation (4.3) then implies that $w_{p'}(v')<w_{p'}(v)$, so the assertion of the proposition follows from Theorem~1.5.
\end{proof}
Since there are infinitely many primes $p'$ such that $p'\equiv p\pmod{D}$, we get the following corollary.
\begin{corollary}
If the series $\Phi_{v,\pi}(\lambda)$ does not have $p$-integral coefficients, then there are infinitely many primes $p'$ for which the series $\Phi_{v,\pi'}(\lambda)$ does not have $p'$-integral coefficients and is $p'$-adically unbounded.
\end{corollary}
In the nonconfluent case, this corollary gives the following result.
\begin{corollary}
If the series $\Phi_v(\lambda)$ of (1.3) has $p$-integral coefficients for all but finitely many primes $p$, then it has $p$-integral coefficients for all primes $p$ for which $v$ is $p$-integral.
\end{corollary}
We apply this corollary to the case of hypergeometric series which are also algebraic functions.
\begin{proposition}
Suppose that $\Phi_v(\lambda)$ is algebraic over ${\mathbb Q}(\lambda)$ and the set $L_v$ lies in a cone in ${\mathbb R}^N$ with vertex at the origin. Then $\Phi_v(\lambda)$ has $p$-integral coefficients for all primes $p$ for which $v$ is $p$-integral.
\end{proposition}
\begin{proof}
By the multivariable version of Eisenstein's theorem on algebraic functions (see Sibuya-Sperber\cite[Theorem 37.2]{SS} and the remark below), there exist positive integers $b_0,b_1,\dots,b_N$ such that the series $b_0\Phi_v(b_1\lambda_1,\dots,b_N\lambda_N)$ has integral coefficients. This implies that $\Phi_v(\lambda)$ has $p$-integral coefficients for all but finitely many primes. The assertion of the proposition then follows from Corollary~7.4.
\end{proof}
{\bf Remark.} Since the reference \cite{SS} seems not to be readily available, we include a proof of the multivariable version of Eisenstein's Theorem in the next section.
{\bf Example.} Proposition 7.5 implies that if the series $F(t_1,\dots,t_r)$ defined in (5.5) is an algebraic function, then its coefficients are $p$-integral for all primes $p$ for which all $\theta_j$ and $\sigma_k$ are $p$-integral. In particular, if $\theta_j=1$ and $\sigma_k=1$ for all $j$ and $k$ (so that the coefficients of $F(t_1,\dots,t_r)$ are ratios of products of factorials) then these coefficients are integers. When $r=1$, this observation is due to Rodriguez-Villegas\cite{RV}.
\section{Multivariable Eisenstein's Theorem}
The notation of this section is independent of the rest of the paper. We leave it to the reader to prove (for example, by induction on $n$) that if $C$ is a cone in ${\mathbb R}^n$ generated by vectors in ${\mathbb Z}^n$ and with a vertex at the origin, then there exists a basis $E$ for ${\mathbb Z}^n$ such that every element of $C$ is a linear combination of elements of $E$ with nonnegative coefficients.
This implies that the ${\mathbb Q}$-algebra of formal power series of the form
\[ \sum_{u\in{\mathbb Z}^n\cap C} c_uX^u,\quad c_u\in{\mathbb Q}, \]
is a ${\mathbb Q}$-subalgebra of a power series ring isomorphic to ${\mathbb Q}[[X_1,\dots,X_n]]$. Thus for the version of Eisenstein's theorem used in the proof of Proposition~7.5, it suffices to prove it for ${\mathbb Q}[[X_1,\dots,X_n]]$.
We recapitulate an old result of Sibuya-Sperber\cite{SS} which gives an analogue in the setting of several variables of Eisenstein's classical one-variable result giving necessary conditions on the coefficients of a power series $g(X) \in \mathbb{Q}[[X]]$ in order for it to be an algebraic function over $\mathbb{Q}(X)$. This argument is taken entirely from the account in \cite{SS}. We repeat it here for convenience since the original publication may not be easily accessible.
We will proceed more generally and prove the following multivariable analogue of Eisenstein as a corollary.
\begin{theorem}
Let $g(X_1,\dots,X_n) = \sum_{|\alpha| \geq 0} C(\alpha)X^{\alpha} \in \mathbb{Q}[[X_1, \dots,X_n]]$ be algebraic over $\mathbb{Q}(X_1,\dots,X_n)$. Then there exists a positive integer $N$ such that
\begin{equation}
N^{|\alpha|}C(\alpha) \in {\mathbb Z}
\end{equation}
for all $\alpha$ with $|\alpha| >0$.
\end{theorem}
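A classical one-variable illustration (our own example, not from the text): $g(X)=\sqrt{1+X}=\sum_m\binom{1/2}{m}X^m$ is algebraic over $\mathbb{Q}(X)$, and $N=4$ works, since $4^m\binom{1/2}{m}=\pm 2C_{m-1}$ (twice a Catalan number) is an integer for $m\geq 1$.

```python
from fractions import Fraction

def binom_half(m):
    """Generalized binomial coefficient binom(1/2, m)."""
    out = Fraction(1)
    for i in range(m):
        out *= (Fraction(1, 2) - i) / (i + 1)
    return out

# N = 4 clears all denominators in sqrt(1+X) = sum_m binom(1/2, m) X^m:
assert all((4**m * binom_half(m)).denominator == 1 for m in range(1, 30))
assert [4**m * binom_half(m) for m in range(1, 5)] == [2, -2, 4, -10]
```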
This result is an easy consequence of the following somewhat stronger result.
\begin{theorem}
Let $A$ be a commutative integral domain with identity of characteristic zero and let $K$ be its field of fractions. Let $f(X) = \sum_{m=0}^{\infty} c_m X^m \in K[[X]]$ be algebraic over $K(X)$. Then there exist positive integers $M$ and $M'$ with $M\geq M'$ such that the power series
\[ \tilde{f}(X) = \sum_{m=M+1}^{\infty} c_m X^{m-M'} \]
satisfies an equation of the form
\[ \rho \tilde{f}(X) = X F_0(X, \tilde{f}), \]
where $\rho \in A$ and $F_0 \in A[X,Z]$.
\end{theorem}
Before turning to the proof of Theorem 8.3 we show how Theorem 8.1 follows from Theorem 8.3.
\begin{proof}[Proof of Theorem 8.1]
Let $g(X_1,\dots,X_n)$ be as in the statement of Theorem 8.1. Let $\{ t, Y_1,\dots,Y_n \}$ be a collection of $n+1$ variables and set
\[ f(t,Y_1,\dots,Y_n) = g(tY_1,\dots,tY_n) = \sum_{m=0}^{\infty} \tilde{c}_m t^m, \]
where $\tilde{c}_m = \sum_{|\alpha| = m} C(\alpha) Y^{\alpha}$.
Since we assume $g$ to be algebraic over the field $\mathbb{Q}(X_1,\dots,X_n)$ it follows that $f$ is algebraic over $\mathbb{Q}(t, Y_1,\dots,Y_n)$. To apply Theorem~8.3, we take $A= \mathbb{Z}[Y_1,\dots,Y_n]$ with field of fractions $K = \mathbb{Q}(Y_1,\dots,Y_n)$. Then according to Theorem 8.3 there are positive integers $M$ and $M'$ with $M \geq M'$ such that $\tilde{f}(t) = \sum_{m=M+1}^{\infty} \tilde{c}_m t^{m-M'}$ satisfies
\begin{equation}
\rho \tilde{f}(t) = tF_0(t, \tilde{f}),
\end{equation}
where $\rho \in \mathbb{Z}[Y_1,\dots,Y_n]$ and $F_0 \in \mathbb{Z}[Y_1,\dots,Y_n][t,Z]$. We rewrite $\tilde{f} = \sum_{m=0}^{\infty} \gamma_m t^m$ (so $\gamma_{m} = \tilde{c}_{m+M'}$ for $m\geq M+1-M'$ and $\gamma_m = 0$ for $m=0,\dots,M-M'$) and $F_0(t, \tilde{f}) = \sum_{ (j,h) \in J} \mu_{j,h}t^j \tilde{f}^h$ with $\gamma_m \in \mathbb{Q}[Y_1,\dots,Y_n]$, $J$ a finite subset of ${\mathbb N}^2$, and $\mu_{j,h} \in \mathbb{Z}[Y_1,\dots,Y_n]$.
The first possibly nonzero coefficient of $\tilde{f}$ is $\gamma_{M+1-M'}$, and from (8.4) we compute
\[ \rho\gamma_{M+1-M'} = \mu_{M-M',0}\in{\mathbb Z}[Y_1,\dots,Y_n]. \]
For general $m$, identifying coefficients of $t^m$ in Equation~(8.4) and multiplying by $\rho^{m-1}$ gives the recursions
\[ \rho^m \gamma_m = \sum_{\substack{(j,h)\in J\\ \sigma_1 + \cdots +\sigma_h = m-1-j}} \mu_{j,h} \rho^j (\rho^{\sigma_1} \gamma_{\sigma_1})(\rho^{\sigma_2} \gamma_{\sigma_2})\cdots(\rho^{\sigma_h} \gamma_{\sigma_h}). \]
This implies by induction on $m$ that $\rho^m \gamma_m \in \mathbb{Z}[Y_1,\dots,Y_n]$ for all $m>0$.
We write $\rho = \tau\hat{\rho}$, where $\hat{\rho} \in \mathbb{Z}[Y_1,\dots,Y_n]$ has Gauss content equal to 1 and $\tau \in \mathbb{Z}$. Also for each $m$ we write $\gamma_m = \nu_m\hat{\gamma}_m$ with $\hat{\gamma}_m \in \mathbb{Z}[Y_1,\dots,Y_n]$ having Gauss content equal to 1 and $\nu_m\in \mathbb{Q}$. Clearly then by unique factorization in $\mathbb{Z}[Y_1,\dots,Y_n]$ we have $\tau^m \nu_m \in \mathbb{Z}$. Thus $\tau^m$ clears denominators for $\gamma_m = \tilde{c}_{m+M'}$ for $m\geq M+1-M'$, so taking provisionally $N=\tau$ in (8.2) above we see that (8.2) holds for all $\alpha$ with $|\alpha| \geq M+1$. Multiplying $N$ by a suitable factor to clear denominators in the coefficients of the lower degree terms completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem 8.3]
We are given that $Z =f$ satisfies a non-trivial polynomial equation $F(X, Z) = 0$ where $F$ is a polynomial in $X$ and $Z$ with coefficients in $K$. We fix a choice of $F$ of minimal degree in $Z$. If we put $F_Z = {\partial F}/{\partial Z}$, then clearly $F_Z(X, f) \neq 0$. We may write
\[ F_Z(X, f(X))= X^{\mu} \phi(X), \]
where $\mu \in{\mathbb N}$ and $\phi(0) \neq 0$. Set $M= 2\mu +1$ and $M' = \mu + 1$. Let $f_M(X) = \sum_{m=0}^M c_mX^m$. Since $M > \mu$, we may write
\[ F_Z(X, f_M(X)) = X^{\mu} \phi_M(X), \]
where $\phi_M(X) \in K[X]$ and $\phi_M(0) \neq 0$. Also, since $F(X, f(X)) = 0$ and $f \equiv f_M \pmod{X^{M+1}}$, we have $F(X, f_M(X)) = X^{M+1} \psi_M(X)$, where $\psi_M(X) \in K[X]$. From the Taylor series we get
\[ F(X,f_M(X)+W) = F(X, f_M(X)) + F_Z(X, f_M(X))W + W^2G_M(X,W) \]
with $G_M \in K[X, W]$. But then for $\tilde{f}$ defined as in Theorem~8.3,
\begin{multline*}
F(X, f_M(X) + X^{M'} \tilde{f}) = \\
X^{M+1} \psi_M(X) + X^{\mu + M'}\phi_M(X) \tilde{f} + X^{2M'}\tilde{f}^2G_M(X, X^{M'}\tilde{f}).
\end{multline*}
Since $f = f_M(X) + X^{M'}\tilde{f},$ multiplying throughout by $X^{-(\mu +M')}$ gives
\begin{align*}
0 & = X^{-(\mu + M')} F(X, f_M + X^{M'}\tilde{f}) \\
& = \phi_M(X)\tilde{f} + X^{M+1-\mu - M'} \psi_M(X) + X^{M' - \mu}\tilde{f}^2G_M(X,X^{M'}\tilde{f})\\
& =\phi_M(0)\tilde{f} + X\bigg( X^{-1}(\phi_M(X) - \phi_M(0))\tilde{f} + X^{M - \mu - M'}\psi_M(X) \\
& \hspace*{.25in}+ X^{M' - \mu - 1}\tilde{f}^2G_M(X, X^{M'}\tilde{f}) \bigg).
\end{align*}
It follows then that
\begin{equation}
\phi_M(0)\tilde{f} = XF_1(X, \tilde{f})
\end{equation}
with (using the definitions of $M$ and $M'$ above)
\[ F_1(X, Z) = - \bigg( X^{-1}(\phi_M(X) - \phi_M(0))Z + \psi_M(X) + Z^2G_M(X, X^{\mu+1}Z) \bigg). \]
This shows that $F_1 \in K[X,Z]$. Multiplying by a suitable element of $A$ to clear fractions on both sides of (8.5) gives Theorem~8.3.
\end{proof}
\begin{document}
\title{Optimal Focusing for Monochromatic Scalar and
Electromagnetic Waves}
\author{\sc \small Jeffrey Rauch\thanks{University of Michigan, Ann Arbor 48109 MI, USA. email: [email protected]. Research partially supported by NSF under grant NSF DMS-0807600}
}
\date{}
\maketitle
\begin{abstract}For monochromatic solutions of D'Alembert's
wave equation and Maxwell's equations, we obtain
sharp bounds on the sup norm as a function of the
far field energy. The extremizer in the scalar case
is radial. In the case of Maxwell's equation, the electric
field maximizing the value at the origin
follows longitude
lines on the sphere at infinity.
In dimension $d=3$ the highest electric field
for Maxwell's equation is smaller by a factor of 2/3
than the highest field for the corresponding scalar waves.
as $R\to 0$. The density dips to half max at $R$
approximately equal to one third the wavelength.
The extremizing fields are
identical to those
that attain the maximum field intensity at the
origin.
\end{abstract}
{\bf Key words}: Maxwell equations, focusing,
energy density, extreme light initiative.

{\bf MSC2010 Classification}: Primary 35Q60, 35Q61, Secondary 35L40, 35P15, 35L25.
\section{Introduction.}
The problem we address is to find for fixed frequency $\omega/2\pi$,
the
monochromatic
solutions of the wave equation and of Maxwell's
equation which achieve the highest field values
at a point or more generally the greatest electrical energy in
ball
of fixed small radius. They are constrained by the
energy at $|x|\sim\infty$.
This leads to several
variational problems:
\vskip.1cm
$\bullet$ Maximize
the field strength at a point.
\vskip.1cm
$\bullet$ For fixed $R$, maximize the energy
in a ball of radius $R$.
\vskip.1cm
A third problem is,
\vskip.1cm
$\bullet$ Find $R_{\rm optimal}$ so that the
energy {\sl density} is largest.
\vskip.1cm
We show that the last is degenerate, the maximum
occurring at $R=0$.
There are experiments under way whose strategy
is to focus a number of coherent
high power laser beams on a small volume
to achieve very high energy densities. The problem was proposed to me by
G. Mourou because of his leadership role in the
European Extreme
Light Initiative. If by better focusing one
can reduce the size of the incoming lasers there would
be significant
benefits. Identification of the extrema
guides the deployment of the lasers.
With the experimental design as motivation the
maximization for the Maxwell equations is
interpreted as maximization of the energy in the
electric field $E$, ignoring the magnetic
contribution. Including the magnetic contribution
creates an analogous problem amenable to
the techniques introduced here.
As natural as these questions appear, I have been
unable to find previous work on them.
We study solutions of the
scalar wave equation and of the Maxwell equations,
$$
v_{tt}\ -\ \Delta\,v \ =\ 0\,,
\qquad
E_t = \curl B,
\quad
B_t=-\curl E\,,
\quad
\dive E =\dive B = 0\,,
$$
for spatial dimensions $d\ge 2$. Units are chosen
so that the propagation speed is 1.
\begin{definition}
A solution $v$ of the wave equation is monochromatic
if it is of the form
\begin{equation}
\label{eq:mono}
v\ =\
\psi(t) \,
u(x),
\qquad
{\rm with}
\qquad
\psi^{\prime\prime} +\
\omega^2\,\psi \ =\ 0\,,
\quad
\omega\
>\
0\,.
\end{equation}
Monochromatic solutions
of Maxwell's equation are those of the form
\begin{equation}
\label{eq:mono2}
\psi(t) \,
\big(
E(x)\,,\,B(x)
\big)
\,,
\qquad
{\rm with }
\qquad
\psi^{\prime\prime} +\
\omega^2\,\psi \ =\ 0\,,
\quad
\omega\
> \
0\,.
\end{equation}
\end{definition}
They are generated by
\begin{equation}
\label{eq:monoplusminus}
e^{\pm i\omega t} \, u(x),
\qquad
{\rm and}
\qquad
e^{\pm i\omega t}\,\big(E(x)\,,\,B(x)\big)\,.
\end{equation}
Scaling $t,x\to \omega t,\omega x$ reduces the study to the case
$\omega=1$.
In that case, the {\bf reduced wave equations} are satisfied,
\begin{equation}\label{eq:reduced}
(\Delta +1)u(x)\ =\ 0\,,
\qquad
(\Delta +1)E(x)
\ =\
(\Delta +1)B(x)\ =\ 0\,.
\end{equation}
\vskip.2cm
{\bf Notation.} {\sl The absolute value sign $|\ |$ is used to denote
the modulus of complex numbers, the length of vectors in $\CC^d$,
surface area, and, volume.
}
Examples: $|S^{d-1}|$ and $|B_R(0)|$.
\begin{example}
\label{ex:planewaves}
The plane wave $e^{i(\pm\omega t + \xi x)}$
with $|\xi|=\omega$ is a monochromatic solution of the wave
equation. Its period and wavelength are both equal to
$2\pi/\omega$. For Maxwell's equations the analogue
is $E=e^{i(\pm\omega t + \xi x)}\bfe$ with $\bfe\in \CC^d$
satisfying $\xi\cdot\bfe=0$ to guarantee the divergence free
condition.
\end{example}
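As a quick numerical sanity check (an editorial addition, not part of the paper's argument), the plane wave of Example \ref{ex:planewaves} can be verified to satisfy the reduced wave equation $(\Delta+1)u=0$ symbolically; the particular unit vector $\xi=(1,2,2)/3$ is our choice:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
xi = sp.Rational(1, 3) * sp.Matrix([1, 2, 2])      # unit vector, |xi| = 1
u = sp.exp(sp.I * (xi[0]*x1 + xi[1]*x2 + xi[2]*x3))
lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))
# Delta u = -|xi|^2 u = -u, so (Delta + 1) u = 0
assert sp.simplify(lap + u) == 0
```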
The solutions that interest us tend to zero as $|x|\to \infty$.
\begin{example} When $d=3$,
$
u(x):=
\sin |x|/|x|
$
is a solution of the reduced wave equation
(see also Example \ref{ex:scalmax}). The corresponding
solution of the wave equation is
$$
v\ =\
e^{\pm it}
\frac{\sin |x|}
{|x|}
\ =\
\frac{1}{2}
\Big(
\frac
{e^{i(\pm t+|x|) } }
{|x|}
\ -\
\frac
{ e^{ i(\pm t-|x|) } }
{|x|}
\Big)\,.
$$
For the plus sign,
the first term represents an incoming spherical wave
and the second
outgoing. To create such a solution it suffices
to generate the incoming wave. The outgoing wave with the change
of sign is then generated by that wave after it focuses at the
origin.
\end{example}
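The identity behind this example, $\int_{|\xi|=1} e^{ix\xi}\,d\sigma = 4\pi\sin|x|/|x|$ in $d=3$, can be checked numerically by reducing the surface integral to a one dimensional integral over $s=\cos\theta$. This sketch (the helper name `sphere_integral` and the sample radii are ours) is an editorial addition:

```python
import numpy as np
from scipy.integrate import quad

def sphere_integral(r):
    # By rotational symmetry, with s the cosine of the angle between x and xi,
    # the surface integral over S^2 reduces to 2*pi * int_{-1}^{1} e^{i r s} ds;
    # the imaginary part vanishes because sin(r s) is odd in s.
    val, _ = quad(lambda s: np.cos(r * s), -1.0, 1.0)
    return 2.0 * np.pi * val

for r in [0.5, 1.0, 2.0, 5.0]:
    assert abs(sphere_integral(r) - 4.0 * np.pi * np.sin(r) / r) < 1e-10
```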
\begin{example}
Finite energy solutions of Maxwell's equations
are those for which $\int_{\RR^d} |E|^2 + |B|^2\,dx<\infty.$
They
satisfy
$$
\forall R>0,\qquad
\lim_{t\to \infty}
\
\int_{|x|\le R} |E(t,x)|^2 + |B(t,x)|^2\ dx
\ =\
0\,.
$$
Therefore,
the solution $(E,B)=0$ is the
only monochromatic solution of finite energy.
\end{example}
The solutions $(E(x),B(x))$ that
tend
to zero as $x\to \infty$ define
tempered distributions on $\RR^d$.
When $(E(x),B(x))$
is a tempered solution of
the reduced wave equation,
the Fourier Transforms
satisfy
$$
(1-|\xi|^2)\widehat E(\xi)
\ =\
(1-|\xi|^2) \widehat B(\xi) \ =\ 0.
$$
Therefore the support of $\widehat E$ is contained in the unit
sphere $S^{d-1}:=\big\{|\xi|=1\big\}$. Since $1-|\xi|^2$
has nonvanishing gradient on this set it follows that
the value of $\widehat E$ on a test function $\psi(\xi)$ is
determined by the restriction of $\psi$ to $S^{d-1}$.
Therefore there is a distribution $\bfe\in {\cal D^\prime}(\{|\xi|=1\})$
so that
\begin{equation}
\label{eq:E}
E(x)
\ :=\
\int_{|\xi|=1}
e^{ix\xi}
\
\bfe(\xi)\
d\sigma\,,
\end{equation}
where we use the usual abuse of notation indicating as an
integral the pairing of the distribution $\bfe$ with the
test function $e^{ix\xi}\big|_{|\xi|=1}$.
Conversely, every such expression
is a tempered vector valued solution of the
reduced wave equation.
If $\bfe$ is smooth, the principle of stationary phase
(see \S \ref{sec:competing})
shows that
as
$|x|\to \infty$,
\begin{equation}
\label{eq:stationary}
E(x) =
\frac{1/\sqrt{2\pi}}
{
|x|^{(d-1)/2}
}
\
\Big(
e^{-i | x |}\,\bfe(-x/|x|)
+
e^{i\pi(d-1)/4}\, e^{i|x|}\,
\bfe(x/|x|)
+ O(1/|x|)\Big)\,.
\end{equation}
The field is $O(|x|^{-(d-1)/2})$. In particular,
\begin{equation}
\label{eq:slowgrow}
\sup_{R\ge 1}\
R^{-1}\int_{|x|\le R}|E(x)|^2\ dx
\ <\
\infty\,,\quad
{\rm and}
\end{equation}
\begin{equation}
\label{eq:L2}
\lim_{R\to \infty}
\int_{R\le |x|\le 2R}
|E(x)|^2\ dx
\
=\
c_d\, \int_{|\xi|=1} |\bfe(\xi)|^2\ d\sigma\,.
\end{equation}
For a field defined by a distribution $\bfe$,
\eqref{eq:slowgrow} holds
if and only if
$\bfe\in L^2( S^{d-1} )$.
In
that case \eqref{eq:L2} holds and
the stationary phase approximation holds in
an $L^2$ sense.
This is the class of solutions of the reduced wave
equation that we study. Equation \eqref{eq:L2}
shows that
$ \| \bfe \|_{L^2(S^{d-1})} $
is a natural measure
of the strength of the field at infinity.
The divergence free condition
in Maxwell's
equations is satisfied
if and only if $\xi\cdot \bfe(\xi)=0$ on $S^{d-1}$.
In that case,
the solutions
$e^{\pm it}E(x)$ of the
time dependent equation are
linear combinations of the plane waves in
Example \ref{ex:planewaves}.
Denote by $\bfH$ the closed subspace of
$\bfe\in L^2(S^{d-1};\CC^d)$ with $\xi\cdot\bfe=0$.
For $\bfe\in \bfH$ and $x=r\xi$ with $|\xi|=1$ and $r\gg 1$,
the solution $e^{it}E(x)$ of Maxwell's
equations
satisfies
$$
r^{ (d-1)/2 } \, E(x)
\ \approx\
\frac{1}
{\sqrt{2\pi} }
\Big(
e^{ i(t-r) } \,\bfe(-\xi)
\ +\
e^{ i\pi(d-1)/4 }\,e^{ i(t+r) } \, \bfe(\xi)
\Big)\,.
$$
In practice, the incoming wave
$$
\frac{1}
{ \sqrt{2\pi} }
\
\int_{ S^{d-1} }
e^{ i\pi(d-1)/4 }\
\frac{ e^{ i(t+r) } }
{ r^{ (d-1)/2 } } \ \bfe(\xi)\ d\sigma
$$
is generated at large $r$ and the monochromatic solution is
observed for $t\gg 1$. The phase factor $e^{i\pi(d-1)/4 }$
corresponds to the
phase shift from the focusing at the origin.
The first two variational problems
for Maxwell's equations
seek to maximize
$$
J_1(\bfe) \ :=\
|E(0)|^2\,,
\qquad
{\rm and}
\qquad
J_2(\bfe) \ :=\
\int_{|x|\le R} |E(x)|^2\ dx\,.
$$
among $\bfe\in \bfH$ with $\int_{S^{d-1}}|\bfe(\xi)|^2\,d\sigma=1$.
Theorems \ref{thm:scalarmaxpoint} and \ref{thm:maxpoint} compute
the maxima of $J_1$
in the scalar and electromagnetic cases.
The maximum in the scalar case, and also in the vector case
without the divergence free condition, is $|S^{d-1}|$.
It is attained when and only when $\bfe$ is constant.
For the electromagnetic
case, $\xi\cdot \bfe(\xi)=0$ so the
constant densities are excluded. The maximum is achieved
at multiples and rotations of the field $\ell(\xi)$
from the next definition.
\begin{definition}
\label{def:ell} For $\xi\in S^{d-1}$ denote by $\ell(\xi)$
the projection of the vector $(1,0,\dots,0)$
orthogonal to $\xi$,
\begin{equation}
\label{eq:defell}
\ell(\xi)
\ :=\
(1,0,\dots,0)
\ -\
\big(\xi\cdot(1,0,\dots,0)\big)\,\xi
\ =\
(1,0,\dots,0)
\ -\
\xi_1\,\xi\,.
\end{equation}
\end{definition}
$\ell$ is a vector field whose integral curves are the lines
of longitude connecting the pole $(-1,0,\cdots,0)$ to
the opposite pole
$(1,0,\cdots ,0)$.
The maximum value of $J_1$ for electromagnetic
waves is smaller by $(d-1)/d$
than the extremum in the scalar case.
The same functions also
solve the $J_2$ problem
when $R$ is not too large.
The study of $J_1$ is reduced to an application of
the Cauchy-Schwarz inequality.
In \S \ref{sec:equiv}
the maximization
of $J_2$ is transformed to a
problem in spectral theory.
Maximizing $J_2$
is equivalent to finding the norm of
an operator. In the scalar case we call the operator
$L$. Finding the norm is equivalent to finding
the spectral radius of the self adjoint operator $L^*L$.
The operator $L^*L$ is compact and rotation invariant
on $L^2(S^{d-1})$. Its spectral theory is reduced
by the spaces of spherical harmonics of order $k$.
On the space of spherical harmonics
of degree $k$,
$L^*L$
is multiplication
by a constant
$
\Lambda_{d,k}(R)$
computed exactly in terms of Bessel functions
in
Theorem \ref{thm:lambdaofR}.
Theorem 7.1 shows that
in the scalar case $\Lambda_{d,0}(R)$ is
the largest for $R\le \pi/2$.
For the Maxwell problem, the corresponding operator
is
denoted $\bfL_M^*\bfL_M$. We do not know all its
eigenvalues. However two explicit eigenvalues
are $(2/3)(\Lambda_{d,0}(R)-\Lambda_{d,2}(R))$,
and, $\Lambda_{d,1}(R)$.
When $R\le \pi/2$,
Theorem 7.3 proves that they are the largest and
second largest
eigenvalues. The proof uses the minimax principle.
The eigenfunctions
for the largest eigenvalue are rotates of multiples of $\ell$.
For $R>\pi/2$
we derive
in \S 8
rigorous sufficient conditions guaranteeing
that the
same functions provide the extremizers.
The conditions involve the $\Lambda_{d,k}$.
To verify them we evaluate the integrals defining the
$\Lambda_{3,k}$ approximately.
By such evaluations we show that
for $d=3$ and $R\le 2.5$ the solutions maximizing
$J_1$
also maximize $J_2$.
The energy density is equal to the largest eigenvalue
divided by $|B_R(0)|$.
In the range $d=3, R\le 2.5$
in
both the scalar and electromagnetic case
this quantity is a decreasing
function of $R$. This shows that
the third problem at the start, of finding the radius
with highest energy density is solved by $R=0$.
However the graph is fairly flat.
The density dips to about 1/2 its maximum
at about $R=2$ which is about a third the wavelength.
For
focusing of electromagnetic waves to a ball of
radius $R$ no larger than one third of a wavelength the optimal
strategy is to choose $\bfe(\xi)$ a multiple of a rotate of
$\ell(\xi)$. The extremizing electric fields
are as polarized as a divergence free field can be.
When $d=3$ formula \eqref{eq:stationary} and Example
\ref{ex:incoming}
show that the far field for this choice is equal up to rotations by
$$
c\
\frac{\sin|x|}{|x|}
\
\ell\Big(
\frac{ x }{ | x | }
\Big)\,.
$$
This field is cylindrically symmetric with axis of symmetry
along the $x_1$-axis.
The restriction of $\ell$ to the unit sphere is cylindrically symmetric,
obtained by rotating the following figure about the horizontal axis.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{longitude.eps}
\end{center}
\end{figure}
For the problem of focusing a family of lasers,
this suggests using linearly polarized
sources concentrated near the vertical equator and sparse near the
poles on the horizontal axis. In contrast, for scalar waves one should distribute
sources as uniformly as possible.
\vskip.2cm
{\bf Acknowledgements.} I thank G. Mourou for
proposing this problem.
I conjectured early on that the constants and $\ell(\xi)$ were
the extremizers in the scalar and electromagnetic cases
respectively.
J. Szeftel, G. Allaire,
C. Sogge, P. G\'erard, and J. Schotland
provided both encouragement and help on the path
to the results presented here.
The meetings were in Paris, Pisa, and Lansing.
In Europe I was a guest at the Ecole
Normale Sup\'erieure, Universit\'e de Paris Nord, and,
Universit\`a di Pisa.
Sogge
and G\'erard were guests of the Centro De Giorgi.
Schotland and I were both guests
of the IMA and Michigan State University.
I thank all these individuals and institutions.
\section{Monochromatic waves.}
\subsection{Electromagnetic waves and their transforms.}
\begin{proposition}
\label{prop:divergencefree}
{\bf i.} $E$ given by \eqref{eq:E} satisfies
$
\dive E =0
$
if and only if
\begin{equation}
\label{eq:Fperp}
\bfe(\xi)\cdot \xi \ =\ 0,
\qquad
{\rm on}
\qquad
\{|\xi|=1\}\,.
\end{equation}
{\bf ii.}
If a monochromatic solution of the
Maxwell equations has electric field
given by \eqref{eq:monoplusminus} with $\omega=1$
and $E$ is given by \eqref{eq:E} then the magnetic
field is equal to $e^{ it}\,B(x)$ with
\begin{equation}
\label{eq:B}
B(x)
\ =\
-\
\int_{|\xi|=1}
e^{ix\xi}\
\xi\wedge \bfe(\xi)\
d\sigma\,.
\end{equation}
\end{proposition}
{\bf Proof.}
Differentiating \eqref{eq:E} yields,
$$
\dive E
\ =\
\int_{|\xi|=1}
e^{ix\xi}\
i\,\xi\cdot \bfe(\xi) d\sigma
\,,
\qquad
\curl E \ =\
\int_{|\xi|=1}
e^{ix\xi}\
i\,\xi\wedge \bfe(\xi)\
d\sigma\,.
$$
The first formula proves {\bf i.}
The Maxwell equations together with
\eqref{eq:monoplusminus}
yield
$$
-\,\curl E \ =\ B_t \ =\
\,i\, B\,.
$$
Therefore, the second formula proves {\bf ii.}
\qed
\vskip.2cm
\begin{remark}
The condition \eqref{eq:Fperp} asserts that
$\bfe(\xi)$ is tangent to the unit sphere.
Brouwer's Theorem asserts that if $\xi\mapsto\bfe(\xi)$ is continuous
then there must be a $\underline\xi$ where $\bfe(\underline\xi)=0$.
\end{remark}
\begin{example} If $d=3$ and $E$ is given by \eqref{eq:E}
with $\bfe(\xi)=\ell(\xi)$
then on $|\xi|=1$,
$$
\xi\wedge \ell(\xi)
\ =\
\xi\wedge\Big((1,0,0)\ -\ \xi_1\,\xi\Big)
\ =\
\xi\wedge(1,0,0)
\ =\
(0, \xi_3, -\xi_2)
$$
is the tangent field to latitude lines winding
around the $x_1$-axis. Since this is an odd
function, the magnetic field vanishes at the origin, $B(0)=0$.
\end{example}
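The cross product computation in this example is easy to confirm numerically; this one-off check with a random point on the sphere is an editorial addition:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=3)
xi /= np.linalg.norm(xi)                     # a point on the unit sphere
ell = np.array([1.0, 0.0, 0.0]) - xi[0]*xi   # ell(xi) = e1 - xi_1 xi
# xi x ell = xi x e1 = (0, xi_3, -xi_2), tangent to latitude lines
assert np.allclose(np.cross(xi, ell), [0.0, xi[2], -xi[1]])
```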
\subsection{The competing solutions.}
\label{sec:competing}
First verify the stationary phase formula \eqref{eq:stationary}
from the appendix.
Consider $E$ given by \eqref{eq:E} with $\bfe\in C^\infty(\{|\xi|=1\})$.
For $x$ large, the integral \eqref{eq:E} has two
stationary points, $\xi=\pm x/|x|$. At $\xi=x/|x|$
parameterize the surface by coordinates in the tangent
plane at $x$ to find that
the phase
$x \xi$ has a strict maximum equal to $|x|$ and Hessian
equal to $-I_{(d-1)\times(d-1)}$. At $\xi =
-x/|x|$ the phase has a minimum with value $-|x|$ and Hessian
equal to the identity.
The stationary phase method yields
\eqref{eq:stationary}.
The energy in the electric field
satisfies \eqref{eq:slowgrow} and \eqref{eq:L2}.
\begin{theorem}
Suppose that $E\in {\cal S}^\prime(\RR^d)$
is a tempered solution of the reduced wave
equation given by \eqref{eq:E} with $\bfe\in
{\cal D}^\prime(S^{d-1})$. Then,
\eqref{eq:slowgrow} holds
if and only if
$\bfe \in L^2(S^{d-1};\CC^d)$. In that case
\eqref{eq:L2} holds.
In addition the stationary phase
approximation holds in the sense that
as $R\to\infty$,
$$
\int_{R\le |x|\le 2R}
\Big|
E(x) -
\frac{1/\sqrt{2\pi}}
{
|x|^{(d-1)/2}
}
\
\Big(
e^{-i | x |}\,\bfe(-x/|x|)
+
e^{i\pi(d-1)/4}\, e^{i|x|}\,
\bfe(x/|x|)
\Big)
\Big|^2\,dx
= o(R)\,.
$$
\end{theorem}
{\bf Proof.} The first two assertions are consequences of
H\"ormander \cite{Hor} Theorems 7.1.27
and 7.1.28.
Theorem 7.1.28 also implies that
$$
\sup_{R\ge 1}\ \frac{1}{R}
\int_{R\le |x|\le 2R}
\big|
E(x)
\big|^2\ dx
\ \le \
c(d) \, \int_{|\xi | = 1 }
\big|
\bfe(\xi)
\big|^2\ d\sigma\,.
$$
This estimate shows that to prove the third assertion
it suffices to prove it for the dense set of $\bfe\in C^\infty(S^{d-1})$.
In that case the result is a consequence of the stationary phase
formula \eqref{eq:stationary}.
\qed
\vskip.2cm
\begin{definition}
$\bfH$ is the closed subspace of $\bfe\in L^2(S^{d-1};\CC^d)$
consisting of $\bfe(\xi)$ so that $\xi\cdot\bfe(\xi)=0$.
Denote by $\Pi$ the orthogonal projection
of
$\bfe\in L^2(S^{d-1};\CC^d)$
on $\bfH$.
\end{definition}
The next example explains a connection
between the solutions of the reduced equation that
we consider and those satisfying the Sommerfeld
radiation conditions.
\begin{example} If $g\in {\cal E}^\prime(\RR^d;\CC^d)$
is a distribution with compact support, then there
are unique solutions of the reduced wave equation
$$
(\Delta + 1)E_{out}\ =\ g,
\qquad
({\rm resp.}\quad
(\Delta + 1)E_{in}\ =\ g)
$$
satisfying the outgoing (resp. incoming) radiation
conditions.
The difference $E:=E_{out}-E_{in}$ is a solution
of the homogeneous reduced wave equation.
The field $F:=e^{it}E(x)$ is the unique solution
of the initial value problem
$$
\Box F\ =\ 0,
\qquad
F\big|_{t=0}\ =\ g,
\quad
F_t\big|_{t=0}\ =\ ig\,.
$$
When $g\in L^2$
the formula $E(x)\delta(\tau-1)=c\int_{-\infty}^\infty e^{-i\tau t} F\,dt$
together with the solution formula for the Cauchy problem
imply that
\eqref{eq:slowgrow}
holds
(or see \cite{Hor} Theorem 14.3.4 showing that both incoming
and outgoing fields satisfy \eqref{eq:slowgrow}). More generally,
the Fourier transforms in time of solutions of Maxwell's
equations with compactly supported divergence free
square integrable
initial data yield examples of monochromatic solutions in our class.
\end{example}
\subsection{Spherical symmetry is impossible.}
It is natural to think that focusing is maximized
if waves come in equally in all directions.
For the scalar wave equation that is the case.
However,
such waves do not exist
for Maxwell's equations.
Whatever is the definition of spherical symmetry,
such a field must satisfy the hypotheses
of the following theorem.
\begin{theorem} If $E(x)\in C^1(\RR^d)$ satisfies
$\dive E=0$ and for $x\ne 0$ the angular part
$$
E
\ -\
\bigg(E\cdot \frac{x}{ | x | }\bigg)\,\frac{x}{ | x | }
$$
has length that depends only on $|x|$ then
$E$ is identically equal to zero.
\end{theorem}
{\bf Proof.} The restriction of the angular part of $E$ to each sphere
$|x|=r$ is a $C^1$ vector field tangent to the sphere and of constant length.
Brouwer's Theorem asserts that there is a point $\ux$ on the sphere
where the tangent vector field vanishes. Therefore the constant length
is equal to zero and $E$ is radial.
Therefore in $x\ne 0$,
$$
E(x) = \phi(|x|)\, x
\,.
$$
Since $E\in C^1$ it follows that $\phi\in C^1(\{|x|>0\})$.
Compute for those $x$,
$$
\dive
E\ =\ \phi \, \dive x \ +\
(\nabla_x\phi) \cdot x
\ =\
d\,\phi \ +\ r\,\phi_r \,.
$$
Therefore in $x\ne 0$, $r\,\phi_r+d\,\phi=0$ so $\phi=c\, r^{-d}$.
Since $E$ is continuous at the origin it follows that $c=0$ so
$E=0$ in $x\ne 0$. By continuity, $E$ vanishes identically.
\qed
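The divergence computation $\dive(\phi(|x|)\,x) = d\,\phi + r\,\phi_r$ used in the proof can be checked symbolically; the particular test functions $\phi=r^2$ and $\phi=r^{-3}$ are our choices in this editorial sketch:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

def divergence(phi):
    # div(phi(r) x) in d = 3 dimensions
    E = [phi*x1, phi*x2, phi*x3]
    return sum(sp.diff(E[i], v) for i, v in enumerate((x1, x2, x3)))

# phi = r^2: d*phi + r*phi_r = 3 r^2 + 2 r^2 = 5 r^2
assert sp.simplify(divergence(r**2) - 5*r**2) == 0
# phi = r^{-3} = r^{-d} solves r*phi_r + d*phi = 0, so the divergence vanishes
assert sp.simplify(divergence(r**-3)) == 0
```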
\section{Maximum field strengths.}
We
solve the variational problems associated to
the functional $J_1$ to yield sharp
pointwise bounds on monochromatic waves.
The fact that the bounds for electromagnetic
fields are smaller shows that
focusing effects are weaker.
The extremizing fields are first
characterized by their Fourier Transforms.
Explicit formulas in $x$-space
are given in
\S \ref{subsec:4.4}.
\subsection{Scalar waves.}
\begin{theorem}
\label{thm:scalarmaxpoint}
If
$$
u(x) \ = \
\int_{|\xi|=1}
e^{ix\xi}\ f(\xi)\ d\sigma\,,
\qquad
f
\ \in\ L^2(S^{d-1} ),
$$
and $\ux\in \RR^d$,
then
$$
\big| u(\ux) \big|
\ \le \
|S^{d-1}|^{1/2}
\,
\big\| f \big\|_{L^2(S^{d-1})}
$$
with equality achieved if and only if $f$ is a scalar multiple of $e^{-i\ux\xi}$.
\end{theorem}
{\bf Proof.} The quantity to maximize is the $L^2(S^{d-1})$ scalar
product of $f$ with $e^{-ix\xi}$. The result is exactly the
Cauchy-Schwarz inequality.
\qed
\subsection{Electromagnetic waves.}
From Definition \ref{def:ell},
$\ell(\xi)$ is tangent to the longitude
lines on the unit sphere connecting the pole $(-1,0,\dots,0)$ to the
pole $(1,0,\dots,0)$. It is the gradient of the restriction of the
function $\xi_1$ to the unit sphere.
\begin{theorem}
\label{thm:maxpoint}
If $d\ge 2$ and
$$
E(x)=
\int_{|\xi|=1} e^{ix\xi}\ \bfe(\xi)\ d\sigma\,,
\qquad
\bfe\ \in\
\bfH\,.
$$
Then
\begin{equation}
\label{eq:Ebound}
| E(0) | \ \le\
\bigg(
\frac{d-1}{d}\
|S^{d-1}|
\bigg)^{1/2}\
\big\| \bfe \big\|_{L^2(S^{d-1})}\,.
\end{equation}
Equality holds if and only if $\bfe$ is equal to a constant
multiple of a rotate of
$\ell(\xi)$.
\end{theorem}
{\bf Proof.} By homogeneity it suffices to consider
$\|\bfe\|_{L^2(S^{d-1})}=1$.
Rotation and multiplication by a complex
number of modulus one
reduces to the case $E(0)=|E(0)|(1,0,\dots,0)$
and $ |E(0) | = \int \bfe_1(\xi)\,d\sigma$.
We need to study,
$$
\sup\ \Big\{ \int \bfe_1(\xi)\, d\sigma\ :\
\ \xi\cdot\bfe(\xi)=0, \ \ \int_{|\xi|=1} \|\bfe(\xi)\|^2\,d\sigma \ =\ 1\Big\}\,.
$$
The quantity to be maximized is
$$
\int_{| \xi|=1}
\bfe_1(\xi)\,d\sigma
\ =\
\big(
\bfe\,,\,
(1,0,\dots,0)
\big)_{L^2(S^{d-1})}\,.
$$
The constant function
$(1,0,\dots,0)$ does not belong to the subspace
$\bfH$. The projection theorem shows that the quantity is
maximized for $\bfe$ proportional to the projection of
$(1,0,\dots,0)$ on $\bfH$. Equivalently,
using \eqref{eq:defell} together with $\bfe\cdot\xi=0$ yields
$$
\bfe_1
\ =\
\bfe\cdot (1,0,\dots,0)
\ =\
\bfe\cdot
\big(
\ell(\xi) \ +\ \xi_1\,\xi\big)
\ =\
\bfe\cdot \ell\,.
$$
\begin{equation}
\label{eq:inte1}
\int_{| \xi|=1}
\bfe_1(\xi)\,d\sigma
\ =\
\int_{| \xi|=1}
\bfe(\xi)\cdot \ell(\xi)\ d\sigma
\end{equation}
which is equal to the $\bfH$ scalar product of
$\bfe$ and $\ell$. Since one has the orthogonal
decomposition
$$
(1,0,\dots , 0)
\ =\
\ell(\xi) + \xi_1\,\xi,
\qquad
{\rm one\ has},
\qquad
1\ =\
|\ell(\xi)|^2 \ +\ \xi_1^2\,.
$$
The Cauchy-Schwarz inequality
shows that the quantity
\eqref{eq:inte1} is
\begin{equation}
\label{eq:CS}
\le \
\|\bfe \|_{L^2(S^{d-1})}
\,
\|\ell \|_{L^2(S^{d-1})}
\ =\
\|\bfe \|_{L^2(S^{d-1})}
\,
\bigg(\int_{|\xi|=1} (1-\xi_1^2)\ d\sigma\bigg)^{1/2}
.
\end{equation}
The extremum is attained uniquely
when $\bfe =z\,\ell/\|\ell\|$ with $|z|=1$.
To evaluate the
integral on the right of \eqref{eq:CS} compute,
$$
\int_{|\xi|=1} \xi_1^2\, d\sigma
\ =\
\int_{|\xi|=1} \xi_j^2\, d\sigma
\ =\
\frac{1}{d} \int_{|\xi|=1} \sum_j\xi_j^2\ d\sigma
\ =\
\frac{1}{d}\int_{|\xi|=1} 1\,d\sigma
\ =\
\frac{|S^{d-1}|}{d}\,.
$$
Therefore
$$
\int_{|\xi|=1} (1-\xi_1^2)\ d\sigma
\ =\
|S^{d-1}| \ -\
\frac{|S^{d-1}|}{d}
\ =\
|S^{d-1}| \
\frac{ d-1 } { d } \,.
$$
Together with \eqref{eq:CS} this proves
\eqref{eq:Ebound}.
\qed
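The sphere integral evaluated at the end of the proof, $\int_{S^2}\xi_1^2\,d\sigma = |S^2|/3$ in $d=3$, is easy to confirm by direct quadrature with $\xi_1=\cos\theta$; this check is an editorial addition:

```python
import numpy as np
from scipy.integrate import quad

# With xi_1 = cos(theta) on S^2: int_{S^2} xi_1^2 dsigma
#   = 2*pi * int_0^pi cos^2(theta) sin(theta) dtheta = 4*pi/3 = |S^2|/d
val, _ = quad(lambda t: np.cos(t)**2 * np.sin(t), 0.0, np.pi)
assert abs(2*np.pi*val - 4*np.pi/3) < 1e-10
# hence int (1 - xi_1^2) dsigma = |S^2| (d-1)/d = 8*pi/3
assert abs(4*np.pi - 2*np.pi*val - 8*np.pi/3) < 1e-10
```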
\begin{remark} If one constrains $\bfe$ to have
support in a subset $\Omega$ then with $\chi$ denoting
the characteristic function of $\Omega$,
\begin{equation*}
\int_{|\xi|=1}
\bfe(\xi)\cdot\ell(\xi)\ d\sigma
=
\int_{|\xi|=1}
\bfe(\xi)\cdot\ell(\xi)\chi(\xi)\ d\sigma
\end{equation*}
and $E_1$ is maximized by the choice
$\bfe = \ell(\xi)\chi(\xi)$. In the extreme light initiative
$\Omega$ is a small number of disks distributed
around the equator $x_1=0$.
\end{remark}
\subsection{Derivative bounds.}
\label{sec:derivatives}
\begin{corollary} If $d\ge 2$ and $E$ satisfies
$$
E(x)=
\int_{|\xi|=1} e^{ix\xi}\ \bfe(\xi)\ d\sigma\,,
\qquad
\bfe\ \in \
\bfH\,,
$$
then for all
$\alpha\in \NN^d$ and $\ux\in \RR^d$,
\begin{equation}
\label{eq:DEbound}
\big|
\partial_x^\alpha E(\ux)
\big|
\ \le\
\bigg(
\frac{d-1}{d}\
|S^{d-1}|
\bigg)^{1/2}
\
\big\|\bfe\big\|_{L^2(S^{d-1})}
\,.
\end{equation}
\end{corollary}
{\bf Proof.} The case $\alpha=0$ follows from Theorem
\ref{thm:maxpoint} applied to
$$
\widetilde E(x)
\ :=\
E(x+\ux)
\ = \
\int_{|\xi|=1}
e^{ix\xi}
\
e^{i\ux\xi}\,
\bfe(\xi)\
d\sigma
\ :=\
\int_{|\xi|=1}
e^{ix\xi}
\
\widetilde \bfe(\xi)\
d\sigma
\,.
$$
Compute for $|\alpha|>0$
$$
\partial_x^\alpha E \ = \
\partial_x^\alpha
\int_{|\xi|=1}
e^{ix\xi}\, \bfe(\xi)\ d\sigma
\ =\
\int_{|\xi|=1}
e^{ix\xi}\
(i\xi)^\alpha\bfe(\xi)\ d\sigma
\,,
$$
which is of the same form as $E$ with
density $(i\xi)^\alpha\bfe(\xi)$ orthogonal
to $\xi$. Since $|\xi_j|\le 1$
it follows that $|\xi^\alpha|\le 1$ so
$
\|(i\xi)^\alpha\bfe\|_{L^2(S^{d-1})}
\le \| \bfe\|_{L^2(S^{d-1})}
$.
Therefore the general case
follows from the case $\alpha=0$.
\qed
\begin{remark} Derivative bounds for the scalar case
are derived in the same way. They lack the factor
$(d-1)/d$.
\end{remark}
\subsection{Formulas for the extremizing fields.}
\label{subsec:4.4}
The electric
field corresponding to the extremizing
density $\ell$ is explicitly calculated.
The computation relies on
relations between Bessel functions, spherical harmonics,
and, the Fourier Transform. These relations
are needed to analyse $J_2$.
Start from
identities in Stein-Weiss \cite{Stein}.
Their Fourier transform is defined on
page 2,
$$
\int f(x)\ e^{-i2\pi x\xi}\ dx,
\qquad
{\it n.b.} {\rm \ the \ } 2\pi{\rm\ in\ the \ exponent.}
$$
We will not follow this convention, so
adapt
their identities.
The Bessel function of order $k$ is
(page 153),
\begin{equation}
\label{eq:Jdef}
J_k(t) =
\frac{(t/2)^k}{\Gamma[(2k+1)/2]\ \Gamma(1/2)}\
\int_{-1}^1
e^{its}\
(1-s^2)^{(2k-1)/2}\
ds,
\ \
-1/2 <k\in \RR\,.
\end{equation}
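The integral representation \eqref{eq:Jdef} can be compared against a standard Bessel implementation; this editorial sketch uses `scipy.special.jv`, and the sample orders and arguments are ours (orders $k\ge 1/2$ keep the integrand smooth):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

def bessel_poisson(k, t):
    # right-hand side of the Poisson integral representation (eq:Jdef);
    # the imaginary part vanishes since (1 - s^2)^{(2k-1)/2} is even in s
    c = (t/2.0)**k / (gamma((2*k + 1)/2.0) * gamma(0.5))
    val, _ = quad(lambda s: np.cos(t*s) * (1 - s*s)**((2*k - 1)/2.0), -1.0, 1.0)
    return c * val

for k in [0.5, 1.5, 2.0]:
    for t in [0.7, 2.0, 5.0]:
        assert abs(bessel_poisson(k, t) - jv(k, t)) < 1e-8
```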
Theorem 3.10 (page 158) is the following.
\begin{theorem}
If $x\in \RR^d$, $f= f(|x|)P(x)\in L^1(\RR^d)$ with $P$ a homogeneous
harmonic polynomial of degree $k$,
then $\int f(x)e^{-2\pi i x \xi}dx=F(|\xi|)P(\xi)$
with
$$
F(r) \ =\
2\pi\,
i^{-k}\,
r^{-(d+2k-2)/2}\,
\int_0^\infty
f(s)\
J_{(d+2k-2)/2}(2\pi r s)\
s^{(d+2k)/2}\
ds\,.
$$
\end{theorem}
This theorem is equivalent, by scaling and linear combination,
to the same formula with $f=\delta(r-1)$.
That case is the identity,
\begin{equation}
\label{eq:steinP}
\int_{|x|=1}
e^{-i2\pi x\xi}\
P(x)\
d\sigma
\ =\
2\pi\,
i^{-k}\,
|\xi|^{-(d+2k-2)/2}\,
J_{(d+2k-2)/2}(2\pi |\xi|)\
P(\xi).
\end{equation}
\begin{remark} {\bf i.} For $|\xi|\to \infty$, $J_{(d+2k-2)/2}(|\xi|)=O(|\xi|^{-1/2})$,
and
$P(\xi)=O(|\xi|^k)$ so the right hand side is
$O(|\xi|^{-(d-2)/2-1/2})= O(|\xi|^{-(d-1)/2})$
as required by the principle of stationary phase.
{\bf ii.}
For $|\xi|\to 0$, $J_{(d+2k-2)/2}(|\xi|) = O(|\xi|^{(d+2k-2)/2})$
so the right hand side of \eqref{eq:steinP} is $O(|\xi|^k)$.
The higher the order of $P$ the smaller is the Fourier
transform near the origin.
\end{remark}
To adapt to the Fourier transform without the $2\pi$ in
the exponent,
use the substitution $\eta=2\pi \xi$, $|\eta|=2\pi |\xi|$
to find,
\begin{equation*}
\label{eq:hormP}
\int_{|x|=1}
e^{-ix\eta}\
P(x)\
d\sigma
\ =\
2\pi\,
i^{-k}\,
(|\eta|/2\pi)^{-(d+2k-2)/2}\,
J_{(d+2k-2)/2}(|\eta|)\
P(\eta/2\pi)\,.
\end{equation*}
Using the homogeneity of $P$ yields
$$
\ =\
(2\pi)^{1-k}\,
i^{-k}\,
(|\eta|/2\pi)^{-(d+2k-2)/2}\,
J_{(d+2k-2)/2}(|\eta|)\
P(\eta)\,.
$$
The exponent of $2\pi$ is equal to
$d/2$
yielding,
\begin{equation}
\label{eq:harmonic2}
\int_{|x|=1}
e^{-ix\eta}\
P(x)\
d\sigma
=
(2\pi)^{d/2}\,
i^{-k}\,
|\eta|^{-(d+2k-2)/2}\,
J_{(d+2k-2)/2}(|\eta|)\
P(\eta).
\end{equation}
Since $|\eta|^{-k}P(\eta)=P(\eta/|\eta|)$,
\eqref{eq:harmonic2} is equivalent to,
\begin{equation}
\label{eq:FThomog}
\int_{|x|=1}
e^{-ix\eta}\
P(x)\
d\sigma
=
(2\pi)^{d/2}\,
i^{-k}\,
|\eta|^{-(d-2)/2}\,
J_{(d+2k-2)/2}(|\eta|)\
P(\eta/|\eta|).
\end{equation}
The change of variable $\eta\mapsto -\eta$ yields,
\begin{equation}
\label{eq:FThomog2}
\int_{|x|=1}
e^{ix\eta}\
P(x)\
d\sigma
=
(2\pi)^{d/2}\,
(-i)^{-k}\,
|\eta|^{-(d-2)/2}\,
J_{(d+2k-2)/2}(|\eta|)\
P(\eta/|\eta|).
\end{equation}
Finally interchange the role of $x$ and $\eta$ to find,
\begin{equation}
\label{eq:FThomog3}
\int_{|\eta|=1}
e^{ix\eta}\
P(\eta)\
d\sigma
=
(2\pi)^{d/2}\,
(-i)^{-k}\,
|x|^{-(d-2)/2}\,
J_{(d+2k-2)/2}(|x|)\
P( x / |x| ).
\end{equation}
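Formula \eqref{eq:FThomog3} can be spot checked numerically. For $d=3$, $k=1$, $P(\eta)=\eta_1$ and $x=(r,0,0)$, both sides are purely imaginary and the left side reduces to a one dimensional integral; the helper names below are ours in this editorial sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def lhs_imag(r):
    # Im of int_{|eta|=1} e^{i x.eta} eta_1 dsigma with x = (r,0,0):
    # 2*pi * int_{-1}^{1} s sin(r s) ds (the real part is odd, hence 0)
    val, _ = quad(lambda s: s * np.sin(r*s), -1.0, 1.0)
    return 2.0 * np.pi * val

def rhs_imag(r):
    # (2*pi)^{3/2} (-i)^{-1} r^{-1/2} J_{3/2}(r) * P(x/|x|), with (-i)^{-1} = i
    return (2.0*np.pi)**1.5 * r**-0.5 * jv(1.5, r)

for r in [0.5, 1.3, 4.0]:
    assert abs(lhs_imag(r) - rhs_imag(r)) < 1e-9
```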
\begin{example}
\label{ex:scalmax}
The second most interesting example
is the extremizing field for the scalar case when $d=3$. In that case
$P$ is constant and there
is a short derivation. The function $u(x):=\int_{| \xi | =1 } e^{ix\xi}\,d\sigma$
is a radial solution of $(\Delta+1)u=0$. In $x\ne 0$ these
are spanned for $d=3$ by $e^{\pm ir}/r$. Smoothness
at the origin forces $u=A\sin r/r$. Since
$u(0)=|S^{d-1}|$
it follows that
$A=|S^{d-1}|$.
\end{example}
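This short derivation agrees with the $k=0$, $d=3$ case of \eqref{eq:FThomog3}: since $J_{1/2}(r)=\sqrt{2/(\pi r)}\sin r$, the right hand side collapses to $4\pi\sin r/r$. A quick numerical confirmation (an editorial addition):

```python
import numpy as np
from scipy.special import jv

# k = 0, d = 3 in (eq:FThomog3): (2*pi)^{3/2} r^{-1/2} J_{1/2}(r)
# must equal |S^2| sin(r)/r = 4*pi sin(r)/r
for r in [0.3, 1.0, 3.0, 7.0]:
    lhs = (2.0*np.pi)**1.5 * r**-0.5 * jv(0.5, r)
    assert abs(lhs - 4.0*np.pi*np.sin(r)/r) < 1e-9
```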
The most interesting case for us is $d=3$ and
the extremizing field $E$
with $\bfe(\xi)=\ell(\xi)$.
Since $\ell$ is not a spherical harmonic, the preceding
result does not apply directly.
To find the exact electric field, decompose $\ell$
in spherical harmonics.
\begin{lemma}
\label{lem:ellexp}
The
spherical harmonic expansion of the restriction of $\ell(\xi)$
to the unit sphere $S^{d-1}\subset \RR^d$ is
\begin{equation}
\label{eq:ellexp}
\ell(\xi)\ =\ \bigg(
\frac{d-1}{d} -
\sum_{j=2}^d
\frac{\xi_1^2-\xi_j^2}{d}
\,,\,
-\xi_1\xi_2
\,,\,
\cdots
\,,\,
-\xi_1\xi_d
\bigg)
\quad
{\rm on}
\quad
|\xi|=1\,.
\end{equation}
\end{lemma}
{\bf Proof of lemma.}
When $|\xi |=1$,
\begin{equation}
\label{eq:ellstep1}
\ell(\xi)
=
(1,0,\dots,0) -
\xi_1\xi
=
(1,-\xi_1\xi_2, \dots, -\xi_1\xi_d) - (\xi_1^2,0,\dots,0)
\,.
\end{equation}
The first summand has coordinates that are
spherical harmonics.
Decompose
$$
\xi_1^2
\ =\
\frac{\xi_1^2 + \dots + \xi_d^2}{d}
\ +\
\sum_{j=2}^d
\frac{\xi_1^2-\xi_j^2}{d}\,,
\qquad
\xi\in \RR^d\,,
$$
to find the expansion in spherical harmonics of the restriction of $\xi_1^2$
to the unit sphere
$\xi_1^2 + \dots + \xi_d^2=1$,
\begin{equation}
\label{eq:xi1square}
\xi_1^2
\ =\
\frac{1}{d}
\ +\
\sum_{j=2}^d
\frac{\xi_1^2-\xi_j^2}{d}
\qquad
{\rm on}
\qquad
\xi_1^2 + \cdots + \xi_d^2=1\,.
\end{equation}
Using \eqref{eq:xi1square} in \eqref{eq:ellstep1} proves
\eqref{eq:ellexp}.
\qed
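The expansion \eqref{eq:ellexp} is easy to verify pointwise on the sphere for $d=3$; this random spot check is an editorial addition:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    xi = rng.normal(size=3)
    xi /= np.linalg.norm(xi)                       # point on S^2
    ell = np.array([1.0, 0.0, 0.0]) - xi[0]*xi     # ell(xi) = e1 - xi_1 xi
    # first component of (eq:ellexp) with d = 3
    first = 2.0/3.0 - ((xi[0]**2 - xi[1]**2) + (xi[0]**2 - xi[2]**2))/3.0
    rhs = np.array([first, -xi[0]*xi[1], -xi[0]*xi[2]])
    assert np.allclose(ell, rhs)
```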
\begin{example}
\label{ex:incoming} The stationary phase formula
\eqref{eq:stationary} applied to the extremizing
$\bfe=\ell(\xi)$ which is an even function yields
for $d=3$
$$
E(x)
=
\frac{-1}{\sqrt{2\pi} }\
\ell\Big(
\frac
{x}
{ |x| }\Big)\
\frac
{\sin |x|}
{ |x| }
\ +\ O(|x|^{-2})
\,.
$$
This is the sum of incoming
and outgoing waves with spherical wave fronts and each with
profile on large spheres proportional to $\ell(x/|x|)$. The desired
incoming wave is such an $\ell$-wave.
\end{example}
\section{Equivalent selfadjoint eigenvalue problems.}
\label{sec:equiv}
The section introduces eigenvalue problems equivalent
to the maximization of $J_2$.
\subsection{The eigenvalue problem for focusing scalar waves.}
\begin{definition} For $R>0$
define the compact linear operator $L:L^2(S^{d-1})\to L^2(B_R(0))$
by
$$
(Lf)(x)
\ :=\
\int_{ |\xi|=1} e^{ix\xi}\ f(\xi)\ d\sigma
\,.
$$
\end{definition}
The operator $L$ commutes with rotations. The adjoint
$L^*$ maps $L^2(B_R(0))\to L^2(S^{d-1})$.
\begin{proposition} The following four problems are equivalent.
{\bf i.} Maximize the functional $J_2$ on scalar monochromatic
waves.
{\bf ii.} Find $f\in L^2(S^{d-1})$ with $\|f\|_{L^2(S^{d-1} ) } =1$
so that $\|Lf\|_{B_R(0)}$ is largest.
{\bf iii.} Find
the norm of $L$.
{\bf iv.} Find the largest eigenvalue
of the positive compact self adjoint operator $L^*L$
on $L^2(S^{d-1})$.
\end{proposition}
{\bf Proof.} The equivalence of the first three follows from the
definitions.
The equivalence with the fourth follows from the identity
$$
\|Lf\|^2_{L^2(B_R)}
\ =\
(Lf\,,\, Lf)_{L^2(B_R)}
\ =\
(L^*Lf\,,\, f)_{L^2(S^{d-1})}
\,.
\eqno{
\qed}
$$
\vskip.2cm
\begin{definition}
Define a rank one operator
$$
L^2(S^{d-1};\CC)
\ \ni\
f
\ \to \
L_0f\ :=\
\int_{| \xi | = 1}
f(\xi)\ d\sigma
\ \in \ \CC
\,.
$$
\end{definition}
\begin{remark} {\bf i.} The problem of maximizing $J_1$ for scalar waves is
equivalent to finding the norm of $L_0$ and also finding
the largest eigenvalue of $L_0^*L_0$.
{\bf ii.} The same formula defines an operator from $L^2(S^{d-1})
\to L^2(B_R(0))$ mapping $f$ to a constant function. With only small
risk of confusion we
use the same symbol $L_0$ for that operator too.
\end{remark}
\begin{definition} The vector valued versions of $L$ and $L_0$ are
defined by
$$
(\bfL \bfe)(x)
\ :=\
\int_{| \xi | = 1}
e^{ix\xi}\ \bfe(\xi)\ d\sigma\,,
\qquad
\bfe\in L^2(S^{d-1};\CC^d)\,,
$$
$$
\bfL_0\bfe\ :=\
\int_{| \xi | = 1}
\bfe(\xi)\ d\sigma\,,
\qquad
\bfe\in L^2(S^{d-1};\CC^d)\,.
$$
Denote by $\bfL_M$ and
$\bfL_{0,M}$ the
restriction
of $\bfL$ and $\bfL_0$ to $\bfH$.
\end{definition}
\begin{remark}
\label{rmk:LL}
{\bf i.} The problem of maximizing the
functional $J_1$ for monochromatic electromagnetic
waves is equivalent to finding the largest eigenvalue
of $\bfL_{0,M}^*\bfL_{0,M}$.
{\bf ii.} The problem of maximizing
the functional $J_2$ for
monochromatic electromagnetic
waves is equivalent to finding the
largest eigenvalue of
$\bfL_M^*\bfL_M$.
{\bf iii.} The operator $\bfL \Pi$ is equal to $\bfL_M$ on $\bfH$
and equal to zero on $\bfH^\perp$. Therefore
$(\bfL \Pi)^*(\bfL\Pi)$ is equal to $\bfL_M^*\bfL^{}_M$ on $\bfH$
and equal to zero on $\bfH^\perp$.
\end{remark}
\section{Exact eigenvalue computations.}
\subsection{The operators $L_0^*L_0$, $\bfL_0^*\bfL_0$ and $\bfL_{0,M}^*\bfL_{0,M}$.}
\begin{theorem}
\label{thm:Lzerospec} {\bf i.} The spectrum of $L_0^*L_0$ contains
one nonzero eigenvalue, $|S^{d-1}|$, with multiplicity one.
The eigenvectors are the constant functions.
{\bf ii.} The spectrum of the operator $\bfL_0^*\bfL_0$ contains
one nonzero eigenvalue,
$|S^{d-1}|$, with multiplicity $d$.
The eigenvectors are the $\CC^d$-valued constant functions.
{\bf iii.} The spectrum of $\bfL_{0,M}^*\bfL_{0,M}$
contains one nonzero
eigenvalue, $
|S^{d-1}|(d-1)
/d
$
with multiplicity $d$. The corresponding eigenspace
consists of
$\ell(\xi)$ and functions obtained by rotation and scalar
multiplication.
\end{theorem}
{\bf Proof iii.} Suppose that $f\in \bfH$ with $\|f\|=1$ is
such that the norm of $\int f\ d\sigma$ is maximal.
Rotating $f$ yields a function in $\bfH$ with the same
$\|\bfL_0 f\|$ and with
$\bfL_0 f$ parallel to $(1,0,\dots,0)$. Therefore maximizing
$\bfL_0 f$ and maximizing
$$
\bfL_0 f \cdot (1,0,\dots\,,0)
$$
yield the same extreme value.
Since $f\in \bfH$,
$$
\int f_1 d\sigma
\ =\
\int f\cdot (1,0,\dots,0)\ d\sigma
\ =\
\int f\cdot \ell\ d\sigma\,.
$$
The extreme value is attained for $f$ parallel to $\ell$.
This shows that $\ell$ is an eigenfunction corresponding
to the largest eigenvalue.
Rotating and taking scalar multiples yields a complex
eigenspace of dimension $d$.
Since ${\rm rank}\,\bfL_0=d$ these are all the eigenfunctions.
If $\|f\|_{L^2(S^{d-1} ) } =1$, then the maximization
of $J_0$ shows that
$|\bfL_{0,M}f|^2 =
|S^{d-1}|(d-1)
/d
$
proving the formula for the eigenvalue.
\qed
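As a numerical illustration (ours, not part of the text) of the factor $(d-1)/d$ when $d=3$: by the proof, the nonzero eigenvalue equals $\int_{S^2}(1-\xi_1^2)\,d\sigma = \|\ell\|^2_{L^2(S^2)} = 8\pi/3$. A Python sketch using product quadrature on the sphere:

```python
import math

def sphere_integral(f, ntheta=200, nphi=200):
    """Integrate f over S^2 in spherical coordinates: Simpson's rule in
    theta (ntheta even), periodic trapezoid rule in phi."""
    dtheta = math.pi / ntheta
    dphi = 2.0 * math.pi / nphi
    total = 0.0
    for i in range(ntheta + 1):
        theta = i * dtheta
        # Simpson weights 1, 4, 2, ..., 4, 1
        w = 1.0 if i in (0, ntheta) else (4.0 if i % 2 else 2.0)
        row = sum(f((math.sin(theta) * math.cos(j * dphi),
                     math.sin(theta) * math.sin(j * dphi),
                     math.cos(theta))) for j in range(nphi))
        total += w * row * math.sin(theta)
    return total * dtheta / 3.0 * dphi

area = sphere_integral(lambda xi: 1.0)              # approximately |S^2| = 4 pi
eig = sphere_integral(lambda xi: 1.0 - xi[0]**2)    # approximately 8 pi / 3
```

The quadrature reproduces $|S^2|=4\pi$ and the eigenvalue $|S^2|(d-1)/d = 8\pi/3$ to high accuracy.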
\subsection{The eigenfunctions and eigenvalues of $L^*L$ and $\bfL^*\bfL$.}
\begin{theorem}
\label{thm:lambdaofR}
In dimension $d$, the spherical harmonics of order
$k$
are eigenfunctions of $L^*L$
with eigenvalue
\begin{equation}
\label{eq:Lambdadef}
\Lambda_{d,k}(R)\ :=\
(2\pi)^{d/2}\,
|S^{d-1}|
\
\int_0^R
r
\
\big[
J_{(d+2k-2)/2}(r)
\big]^2
\ dr\,.
\end{equation}
\end{theorem}
\begin{remark}
From the point of view of focusing of
energy
into balls, all spherical harmonics of the same order
are equivalent.
\end{remark}
{\bf Proof.} If $P$ is a homogeneous harmonic
polynomial of degree $k$, formula
\eqref{eq:FThomog3}
shows that
$$
(L\,P)(x) \ =\
\phi_{d,k}( | x | )\
P(x),
$$
defining the function $\phi$.
The operator $L^*$ is an integral operator from
$L^2(B_R)\to L^2(S^{d-1})$ with kernel
$e^{-ix\xi}$. Therefore
$$
L^*L(P)
\ =\
\int_{|x|\le R}
e^{-ix\xi}
\
\phi_{d,k}( | x | )\
P(x)
\
dx\,.
$$
Introduce polar coordinates $x=ry$ with $|y|=1$ to
find
$$
L^*L(P)
\ =\
|S^{d-1}|\
\int_0^R\
\int_{|y|=1}
r^{d-1}\
e^{-iry\xi}\
\phi_{d,k}( r )\
r^k\ P(y)
\
d\sigma(y)\, dr\,.
$$
Formula
\eqref{eq:FThomog3} shows that
$$
\int_{|y|=1}
e^{-iry\xi}\
P(y)
\
d\sigma(y)\
\ =\
\phi_{d,k}(r)\
P(-r\,\xi)
\ =\
(-r)^k\,
\phi_{d,k}(r)\,
P(\xi)\,.
$$
Therefore
$$
L^*L(P)
\ =\
\int_0^R
(-1)^k\,
|S^{d-1}|\
r^{d+2k-1}\
\phi_{d,k}(r)^2\
dr\ P(\xi)\,.
$$
This proves that the harmonic polynomials are
eigenvectors with eigenvalue depending only
on $d$ and $k$.
The formula for the eigenvalue follows on noting
that the eigenvalue is equal to the square of
the $L^2(B_R(0))$ norm of
\begin{equation}
\label{eq:defF}
F(x):=
\int_{|\eta|=1}
\
e^{ix\eta}
\
P(\eta)
\
d\sigma\,,
\qquad
\int_{|\eta|=1}
\big|P(\eta)\big|^2
\
d\sigma =1.
\end{equation}
Using polar coordinates and \eqref{eq:FThomog3} yields,
\begin{equation}
\label{eq:concentration}
\begin{aligned}
\big\|
F\big\|_{B_R(0)}^2
\ &=\
(2\pi)^{d/2}\,
|S^{d-1}|
\
\int_0^R
r^{-(d-2)}
\
\big[J_{(d+2k-2)/2}(r)\big]^2
\ r^{d-1}
\ dr
\cr
\ &=\
(2\pi)^{d/2}\,
|S^{d-1}|
\
\int_0^R
r
\
\big[
J_{(d+2k-2)/2}(r)
\big]^2
\ dr
\,.
\end{aligned}
\end{equation}
This proves \eqref{eq:Lambdadef}.
\qed
\vskip.2cm
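For $d=3$ the orders $(d+2k-2)/2$ are half integers, so the Bessel functions in \eqref{eq:Lambdadef} are elementary: $rJ_{(2k+1)/2}(r)^2 = (2/\pi)h_k(r)^2$ with $h_0=\sin r$, $h_1=\sin r/r-\cos r$, $h_2=(3/r^2-1)\sin r - 3\cos r/r$. The following sketch (an illustration, ours; the common positive factor $(2\pi)^{3/2}|S^2|(2/\pi)$ in front of the integral is dropped since it does not affect comparisons) evaluates the eigenvalues numerically:

```python
import math

def h(k, r):
    """Elementary half integer Bessel factors for d = 3:
    r * J_{(2k+1)/2}(r)**2 = (2/pi) * h(k, r)**2 for k = 0, 1, 2."""
    if r == 0.0:
        return 0.0
    if k == 0:
        return math.sin(r)
    if k == 1:
        return math.sin(r) / r - math.cos(r)
    return (3.0 / r**2 - 1.0) * math.sin(r) - 3.0 * math.cos(r) / r

def lam(k, R, n=2000):
    """Simpson approximation of Lambda_{3,k}(R), up to the common
    positive constant in front of the integral (n must be even)."""
    step = R / n
    s = h(k, 0.0)**2 + h(k, R)**2
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(k, i * step)**2
    return s * step / 3.0
```

With these conventions $\texttt{lam(0, R)} = \int_0^R \sin^2 r\, dr$, and at $R=\pi/2$ the computed values decrease in $k$.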
The spectral decomposition of $\bfL$ is nearly identical
to that of $L$. The next result is elementary.
\begin{corollary}
\label{cor:bfLspec} The eigenvalues of $\bfL^*\bfL$ are
the same as the eigenvalues of $L^*L$. The eigenspaces
consists of vector valued functions each of whose components
belongs to the corresponding eigenspace of $L^*L$.
\end{corollary}
\subsection{Some eigenfunctions and eigenvalues of $\bfL_M^*\bfL^{}_M$.}
The situation for $\bfL^*_M\bfL^{}_{M}$ is more subtle.
Our first two results show that there are eigenfunctions
intimately related to the eigenvalues $\Lambda_{d,1}(R)$
and $\Lambda_{d,0}(R)$.
\begin{theorem}
\label{thm:H1} The $d$ dimensional space of functions
$\bfe(\xi):=\zeta\wedge\xi$ with $\zeta\in \CC^d\setminus 0$
consists of eigenfunctions of $\bfL_M^*\bfL^{}_M$. The eigenvalue is
$\Lambda_{d,1}(R)$.
\end{theorem}
\begin{remark} These $\bfe(\xi)$ are the vector valued
spherical harmonics of degree 1 that belong to $\bfH$, that is,
that satisfy $\xi\cdot\bfe(\xi)=0$.
\end{remark}
{\bf Proof.} Follows from
$\bfL^*\bfL\,\bfe=\Lambda_{d,1}(R)\,\bfe$
and $\bfe\in \bfH$.
\qed
\vskip.2cm
Though the constant functions which are eigenvectors
of $\bfL^*\bfL$ do not belong to $\bfH$,
their projections on
$\bfH$ yield eigenvectors of $\bfL_M^*\bfL^{}_M$.
\begin{theorem}
\label{thm:LMspec} The $d$ dimensional space
consisting of scalar multiples of rotates of
$\ell(\xi)$ consists of eigenfunctions of $\bfL_M^*\bfL^{}_M$
with eigenvalue equal to
\begin{equation}
\label{eq:LMevalue}
\big(\Lambda_{d,0}(R)-\Lambda_{d,2}(R)\big)\
\frac{d-1}{d}
\,.
\end{equation}
\end{theorem}
{\bf Proof of theorem.} Use \eqref{eq:ellexp}.
Since the spherical harmonics are eigenfunctions of
$\bfL^*\bfL$ one has, suppressing the $R$ dependence of $\Lambda$,
\begin{equation*}
\label{LstarLell}
\bfL^*\bfL\,\ell
\ =\
\Big(
\Lambda_{d,0}\frac{d-1}{d} -
\Lambda_{d,2}\sum_{j=2}^d
\frac{\xi_1^2-\xi_j^2}{d}
\,,\,
-\Lambda_{d,2}\,\xi_1\xi_2
\,,\,
\cdots
\,,\,
-\Lambda_{d,2}\,\xi_1\xi_d
\Big).
\end{equation*}
Multiply \eqref{eq:ellexp} by $\Lambda_{d,2}$ to find
on $\xi_1^2+\cdots +\xi_d^2=1$,
$$
\Lambda_{d,2}\,
\ell\ =\
\bigg(
\Lambda_{d,2}\,\frac{d-1}{d} -
\Lambda_{d,2}\, \sum_{j=2}^d
\frac{\xi_1^2-\xi_j^2}{d}
\,,\,
- \Lambda_{d,2}\,\xi_1\xi_2
\,,\,
\cdots
\,,\,
- \Lambda_{d,2}\,\xi_1\xi_d
\bigg)
\,.
$$
Subtract
from the preceding identity to find,
$$
\bfL^*\,\bfL
\,
\ell
\ =\
\Big( \Lambda_{d,0}-\Lambda_{d,2}\Big)\frac{d-1}{d}
\big(1,0,\dots,0\big)
\,.
$$
Projecting perpendicular to $\xi$
using $\Pi\,(1,0,\cdots,0)= \ell$ yields
$$
\Pi\,
\bfL^*\,\bfL\,\Pi\,\ell
\ =\
\Pi\,
\bfL^*\bfL\, \ell
\ =\
\Big( \Lambda_{d,0}-\Lambda_{d,2}\Big)
\frac{d-1}{d}
\
\ell
\,.
$$
This proves that $\ell$ is an eigenfunction of
$(\bfL\Pi)^*(\bfL\Pi)$
with eigenvalue $\big(\Lambda_{d,0}-\Lambda_{d,2}\big)(d-1)/d$.
Remark \ref{rmk:LL}.iii shows that it is an eigenfunction
of $\bfL_M^*\bfL^{}_M$ with the same eigenvalue.
By rotation invariance the same is true of all scalar multiples of
rotates of $\ell$. They form a $d$ dimensional vector space
spanned by the projections tangent to the unit sphere of the
unit vectors along the coordinate axes.
\qed
\vskip.2cm
As in Theorem \ref{thm:H1},
if one defines $\bfH_k$ to consist of the spherical
harmonics of degree $k$ that belong to $\bfH$, then the
$\bfH_k$
are orthogonal eigenspaces of $\bfL^*_M\bfL^{}_M$
with eigenvalue $\Lambda_{d,k}(R)$.
\begin{example}
In $\RR^2$ the homogeneous $\RR^2$ valued
polynomials of degree two
whose radial components vanish
are spanned by $(-x_1x_2, x_1^2)$
and $(x_2^2, -x_1x_2)$.
There are no harmonic functions in their span,
proving that
when $d=2$, $\bfH_2=0$. It is clear that
$\bfH_0=0$.
Though there are a substantial number of eigenvectors
of $\bfL_M^*\bfL^{}_M$ accounted for by the $\bfH_k$ they are
far from the whole story.
\end{example}
\section{Spectral asymptotics.}
\subsection{Behavior of the $ \Lambda_{d,k}(R)$.}
\begin{proposition}
\label{prop:Lambdaasym}
{\bf i.} As $R\to 0$,
$\ \ \Lambda_{d,k}(R)=O(R^{d+2k})$.
{\bf ii.} As $R\to 0$, $\ \ \Lambda_{d,0}(R) =
| S^{d-1}|\,|B_R(0)|(1+O(R))$.
{\bf iii.} $\lim_{k\to \infty} \Lambda_{d,k}(R)\, =\, 0\ $
uniformly on compact sets of $R$.
\end{proposition}
{\bf Proof.} {\bf i.} Formula \eqref{eq:Jdef} shows that
$J_k(t)=O(t^k)$ as $t\to 0$. Assertion {\bf i}
then follows from \eqref{eq:Lambdadef}.
{\bf ii.} By definition, $\Lambda_{d,0}(R)$ is the square
of the
$L^2(B_R)$ norm of $\int_{| \xi | =1} e^{ix\xi}\,f(\xi)\,d\sigma$
for $f$ a constant function of norm 1. Take
$f=|S^{d-1}|^{-1/2}$.
For $R$ small $e^{ix\xi} = 1 + O(R)$ so
$$
\int_{| \xi | =1} e^{ix\xi}\,f(\xi)\,d\sigma
=
\int_{| \xi | =1} (1+O(R))\, |S^{d-1}|^{-1/2}\,d\sigma
=
|S^{d-1}|^{1/2}\big(1+O(R)\big)\,.
$$
Squaring and integrating over $B_R$ proves {\bf ii}.
{\bf iii.} As $k\to \infty$,
the prefactors
in the formula for $J_k$ tend to zero uniformly since
$\Gamma((2k+1)/2)$ dominates.
The integral in the definition tends to zero
uniformly by Lebesgue's Dominated Convergence Theorem.
\qed
\subsection{Small $R$ asymptotics of the largest eigenvalues.}
\label{sec:smallR}
Each of the operators $L,\bfL,$ and $\bfL_M$ has
integral kernel $e^{ix\xi}$. They differ in the Hilbert
space on which they act. For $x$ small,
$e^{ix\xi}\approx 1$ showing that for $R$ small
the three operators are approximated by
$L_0, \bfL_0,$ and $\bfL_{0,M}$ respectively.
We know
the
exact spectral decomposition of the
approximating operators.
Each has exactly one nonzero eigenvalue.
In performing the approximation some care must
be exercised since the operator to be approximated
has norm $O(R^{d/2})$ tending to zero as $R\to 0$.
\begin{proposition}
\label{prop:approxOp}
{\bf i.} Each of the operators $L$, $\bfL$, and $\bfL_M$
has norm no larger than
$(|B_R(0)|\, |S^{d-1}|)^{1/2}$.
{\bf ii} Each of the differences $L-L_0$,
$\bfL-\bfL_0$, and $\bfL_M-\bfL_{0,M}$ has norm
no larger than
\begin{equation}
\label{eq:upbound}
\frac{
|S^{d-1}|\,
R^{(d+2)/2}
}
{
(d+2)^{1/2}
}
\,.
\end{equation}
\end{proposition}
{\bf Proof.} {\bf i.}
Treat the case of $\bfL$. For $\| \bfe\| =1$, the Cauchy-Schwarz
inequality implies that for each $x$, $\|\bfL \bfe(x)\|_{\CC^d}^2
\le |S^{d-1}|$. Integrating over the ball of radius $R$
proves {\bf i}.
{\bf ii}. The Cauchy-Schwarz inequality estimates the difference by
$$
\begin{aligned}
\big|
(\bfL-\bfL_0)\bfe \big|
&=
\Big|
\int_{|\xi|=1}
\big(
e^{ix\xi}-1
\big)
\,
\bfe(\xi)
\,
d\sigma
\Big|
\le
\int_{|\xi|=1}
|x| \, | \bfe |\, d\sigma
\cr
&\le
|x| \, |S^{d-1}|^{1/2}\, \| \bfe \|_{L^2(S^{d-1})}
\,.
\end{aligned}
$$
For $ \bfe $ of norm one this yields
$$
\begin{aligned}
\|
(\bfL -\bfL_0) \bfe
&
\|_{L^2(B_R(0))}^2
\ \le\
|S^{d-1}|\,
\int_{B_R}
|x|^2\,dx\cr
\ &=\
|S^{d-1}|\,
\int_{|\omega|=1}
\int_0^R
r^2\,r^{d-1}\, dr\, d\sigma(\omega)
\ =\
|S^{d-1}|^2\
\frac{R^{d+2}}{d+2}\,,
\end{aligned}
$$
completing the proof.
\qed
\begin{theorem}
\label{thm:smallR1}
{\bf i.} For each $d$ there is an $R_L(d)>0$ so that
for $0\le R<R_L(d)$ the eigenvalue
$\Lambda_{d,0}(R)$
is the largest eigenvalue of $L^*L$.
It has multiplicity one. The eigenfunctions are
constants.
{\bf ii.} For $0\le R<R_L(d)$ the eigenvalue
$\Lambda_{d,0}(R)$
is the largest eigenvalue of $\bfL^*\bfL$.
It has multiplicity d. The eigenfunctions are
constant vectors.
{\bf iii.} For each $d$ there is an $R_M(d)>0$
so that
for $0\le R<R_M(d)$ the eigenvalue
\begin{equation}
\label{eq:evalbehave}
\big(\Lambda_{d,0}(R)-\Lambda_{d,2}(R)\big)
\frac{d-1}{d}
\end{equation}
is the largest eigenvalue of $\bfL_M^*\bfL^{}_M$.
It has multiplicity $d$. The eigenfunctions are
rotates of constant multiples of $\ell$.
{\bf iv.} In all three cases, the other eigenvalues are
$
O(
R^{
d+1
}
)
$.
\end{theorem}
{\bf Proof.} We prove {\bf iii} and {\bf iv}
for the operator $\bfL_M^*\bfL^{}_M$.
Proposition
\ref{prop:approxOp} implies that
the compact self adjoint operators
$\bfL_M^*\bfL^{}_M$ and
$\bfL_{0,M}^*\bfL^{}_{0,M}$
differ by $O(R^{d+1}
)$ in norm.
Part {\bf iii} of
Theorem \ref{thm:Lzerospec}
shows that the spectrum of
$\bfL_{0,M}^*\bfL^{}_{0,M}$
contains one positive eigenvalue,
$\lambda_+:=|B_R(0)|\,|S^{d-1}|(d-1)/d$.
The factor $|B_R(0)|$ arises because
$\bfL_{0,M}$ in the present context is viewed
as an operator with values in the functions
on $B_R(0)\subset\RR^d$.
The eigenfunctions are scalar multiples of rotates
of $\ell$. The rest of the spectrum
is
the eigenvalue $0$.
It follows that the spectrum of
$\bfL_{M}^*\bfL^{}_{M}$ lies in the
union of disks of radius $O(R^{
d+1}
)$
centered at zero and
$\lambda_+$.
For $R$ small these disks are disjoint
and the eigenspace associated to
the disk about
$\lambda_+$
has dimension $d$.
Theorem \ref{thm:LMspec} shows that
the eigenfunctions of $\bfL_{0,M}^*\bfL^{}_{0,M}$
with eigenvalue $\lambda_+$
are
eigenfunctions of $\bfL_{M}^*\bfL^{}_{M}$.
The eigenvalue is given by
\eqref{eq:LMevalue}.
It follows that for $R$ small, the scalar multiples of rotates
of $\ell$ form an eigenspace of $\bfL_M^*\bfL^{}_M$
of dimension $d$ with eigenvalue in the disk
about
$\lambda_+$.
This completes the proof of {\bf iii}.
The fact that the other eigenvalues lie in a disk
of radius $O(R^{d+1})$ centered at the origin
proves {\bf iv}.
The proofs for the operators $L$ and $\bfL$ are
similar.
\qed
\section{Largest eigenvalues for $\bf R\le \pi/2$.}
\subsection{Monotonicity of $\bf \Lambda_{d,k}(R)$ in $\bf k$.}
Recall that the wavelength is equal to $2\pi$.
\begin{theorem}
\label{thm:smallR}
For $0\le R\le \pi/2$, $\Lambda_{d,k}(R)$ is strictly
monotonically
decreasing in $k=0,1,2, \dots$.
In particular the largest eigenvalue of $L^*L$
and $\bfL^*\bfL$ is $\Lambda_{d,0}(R)$.
The corresponding
eigenfunctions are constant scalar
and constant vector functions respectively.
\end{theorem}
{\bf Proof.} Write,
\begin{equation}
\label{eq:Jint}
J_k(r) \ =\
\frac{2\,(r/2)^k}{\Gamma[(2k+1)/2]\ \Gamma(1/2)}\
\int_{0}^1
\cos(rs)\
(1-s^2)^{(2k-1)/2}\
ds\,.
\end{equation}
For $0\le r\le \pi/2$ the cosine factor in the integral is positive.
Since $(1-s^2)^{(2k-1)/2}$
is decreasing in $k$ for $s\in [0,1]$,
the integral is
decreasing in $k$.
$\Gamma[(2k+1)/2]$ is increasing in $k$.
Since $r\le 2$, $(r/2)^k$ decreases with $k$.
The proof is complete.
\qed
\begin{remark} The first figure in
\S
\ref{sec:scalarsim}
shows that
the graphs of the functions $\Lambda_{3,0}(R)$
and $\Lambda_{3,1}(R)$ cross close to $R=\pi$. The proof
shows that this cannot happen for $R\le \pi/2$.
\end{remark}
\subsection{Largest eigenvalues of $\bfL_M^*\bfL^{}_M$ when
$\bf d=3$, $\bf R\le \pi/2$.}
\begin{theorem}
\label{thm:Maxlargest}
When $d=3$ and $R\le \pi/2$ the strictly largest eigenvalue of
$\bfL_M^*\bfL_M$ is $(\Lambda_{3,0}(R)-\Lambda_{3,2}(R))2/3$.
The eigenfunctions are the scalar multiples of
rotates of $\ell$, and,
$\Lambda_{3,1}(R)$ is the next largest
eigenvalue.
\end{theorem}
The proof uses the following criterion valid for all $d,R$.
For ease of reading, the $R$ dependence of $\Lambda_{d,k}(R)$
is often suppressed.
\begin{theorem}
\label{thm:evcond}
{\bf i.} The eigenvalue
$(\Lambda_{d,0}-\Lambda_{d,2})(d-1)/d$ of
$\bfL_M^*\bfL^{}_M$
is strictly larger than all others only if
\begin{equation}
\label{eq:evcond}
(\Lambda_{d,0}-\Lambda_{d,2})(d-1)/d
\ >\
\Lambda_{d,1}
\,.
\end{equation}
{\bf ii.} If in addition to \eqref{eq:evcond},
the two largest eigenvalues of $L^*L$
are
$\Lambda_{d,0}$ and $\Lambda_{d,1}$,
then
the eigenvalue
$(\Lambda_{d,0}-\Lambda_{d,2})(d-1)/d$
of
$\bfL_M^*\bfL^{}_M$
is strictly larger than the others.
The
eigenfunctions are the scalar multiples of
rotates of $\ell$, and, $\Lambda_{d,1}$ is the next largest
eigenvalue of $\bfL_M^*\bfL^{}_M$.
\end{theorem}
\begin{remark} Equation \eqref{eq:evcond} implies
that $\Lambda_{d,0}>\Lambda_{d,1}$. The
additional condition in {\bf ii} is that for all
$k\ge 1$, $\Lambda_{d,1}\ge \Lambda_{d,k}$.
\end{remark}
{\bf Proof of Theorem \ref{thm:evcond}.} {\bf i.} Since $\Lambda_{d,1}$
is also an eigenvalue
of
$\bfL_M^*\bfL^{}_M$, necessity is clear.
{\bf ii.} Under these hypotheses
Theorem \ref{thm:lambdaofR}
shows that $\Lambda_{d,0}$ is the largest
eigenvalue of $L^*L$ with one dimensional
eigenspace consisting of constant functions.
Corollary \ref{cor:bfLspec} shows that
$\bfL^*\bfL$ has the same largest eigenvalue
with $d$ dimensional eigenspace consisting
of $\CC^d$ valued constant functions.
The next largest eigenvalue of $\bfL^*\bfL$ is
$\Lambda_{d,1}$. In particular,
$\bfL^*\bfL$ has exactly $d$ eigenvalues
counting multiplicity that are greater than
$\Lambda_{d,1}$.
Since $\bfL_M$ is the restriction of $\bfL$
to a closed subspace,
the minimax principle implies that
$\bfL_M^*\bfL^{}_M$
has at most
$d$
eigenvalues
counting multiplicity that are greater than
$\Lambda_{d,1}$.
Theorem \ref{thm:LMspec} provides a
$d$ dimensional eigenspace with
eigenvalue given by the left hand side
of \eqref{eq:evcond} and therefore
greater than $\Lambda_{d,1}$. In particular
there are exactly $d$ eigenvalues counting
multiplicity that are greater than
$\Lambda_{d,1}$.
Theorem \ref{thm:H1} shows that $\Lambda_{d,1}$
is an eigenvalue of $\bfL_M^*\bfL^{}_M$, so it must
be the next largest.
\qed
\begin{example} Parts {\bf i} and {\bf ii}
of Proposition \ref{prop:Lambdaasym}
show that the sufficient condition is satisfied
for small $R$. This gives a second
proof that for small $R$, $\ell$ is an extreme eigenfunction
for $\bfL_M^*\bfL_M$. The first proof is part
{\bf iii} of Theorem \ref{thm:smallR1}.
\end{example}
{\bf Proof of Theorem \ref{thm:Maxlargest}.} Verify the sufficient
condition of Theorem \ref{thm:evcond}.ii.
Since $R\le \pi/2$, $\Lambda_{3,k}$ are strictly
decreasing in $k$. Therefore $\Lambda_{3,0}$
and $\Lambda_{3,1}$ are the two largest eigenvalues
of $L^*L$.
It remains to verify
\eqref{eq:evcond}.
Use formulas
\eqref{eq:Lambdadef}
and
\eqref{eq:Jint}.
Formula \eqref{eq:Lambdadef} with $d=3$ and $k=0,1,2$
involves $J_{1/2}, J_{3/2}, J_{5/2}$.
Since the integral in \eqref{eq:Jint} is decreasing in $k$
it follows that
$$
\frac{J_{k+1} (r)}
{ J_k (r)}
\ \le\
\frac{ r } { 2}\
\frac{ \Gamma( (2k+1)/2 ) }
{\Gamma( (2k+1)/2\, +1 ) }
\,.
$$
The functional equation $\Gamma(n+1)=(n+1)\Gamma(n)$
yields
$$
\frac{J_{k+1} (r) }
{ J_k (r) }
\ <\
\frac{ r } { 2}\
\frac{1}
{(2k+3)/2}
\ =\
\frac{r}
{2k+3}
\,.
$$
Therefore,
$$
\frac{J_{3/2} (r) }
{J_{1/2} (r) }
\ <\
\frac{r}{4},
\qquad
{\rm and},
\qquad
\frac{J_{5/2} (r) }
{J_{3/2} (r) }
\ \le\
\frac{r}{6}
\,.
$$
Injecting these estimates in \eqref{eq:Lambdadef}
yields
\begin{equation}
\label{eq:Lambdadecreaase}
\frac{ \Lambda_{3,1}(R) }
{ \Lambda_{3,0}(R) }
\ <\
\frac{R^2}{4^2}
\,,
\qquad
{\rm and},
\qquad
\frac{ \Lambda_{3,2}(R) }
{ \Lambda_{3,1}(R) }
\ <\
\frac{R^2}{6^2}
\,.
\end{equation}
Therefore,
$$
\Lambda_{3,1}
\ \le\
\frac{R^2}{4^2}\,
\Lambda_{3,0},
\quad
{\rm and},
\quad
\Lambda_{3,2}
\ \le\
\frac{R^2}{6^2}\,
\Lambda_{3,1}
\ \le\
\frac{R^2}{4^2}\,
\frac{R^2}{6^2}\,
\Lambda_{3,0}\,,
\quad
{\rm so}\,,
$$
$$
\big(\Lambda_{3,0} -
\Lambda_{3,2}\big)\frac{2}{3} -\Lambda_{3,1}
\ >\
\Lambda_{3,0}\bigg(
\frac{2}{3}
\ -\
\frac{2}{3}\,
\frac{R^4}{4^2 \, 6^2}
\ -\
\frac{R^2}{4^2}
\bigg)
\ :=\
\Lambda_{3,0}\,h(R)\,.
$$
The polynomial $h(R)$ is equal to
$2/3$ when $R=0$ and decreases as $R$
increases. To verify \eqref{eq:evcond} it
suffices to
show that $h(\pi/2)>0$.
Since $2>\pi/2$, $h(\pi/2)>h(2) = 43/108>0$.
\qed
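The closing arithmetic can be confirmed in exact rational arithmetic (a check of ours, not part of the proof):

```python
import math
from fractions import Fraction

def hpoly(R):
    """The polynomial h(R) = 2/3 - (2/3) R^4/(4^2 6^2) - R^2/4^2
    from the end of the proof; exact when R is an integer or Fraction."""
    return Fraction(2, 3) - Fraction(2, 3) * R**4 / Fraction(576) - R**2 / Fraction(16)
```

Here `hpoly(2)` is exactly $43/108$, and since $h$ decreases in $R$ and $\pi/2<2$, $h(\pi/2)>h(2)>0$.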
\section{Numerical simulations to determine largest eigenvalues.}
Recall that the wavelength
is equal to $2\pi$. In this section the dimension $d=3$.
Theorem \ref{thm:lambdaofR},
Corollary \ref{cor:bfLspec}, and Theorem
\ref{thm:Maxlargest} allow one in favorable cases
to find the largest eigenvalues of $L^*L$, $\bfL^*\bfL$,
and
$\bfL_M^*\bfL^{}_M$, by evaluating the integrals
defining $\Lambda_{3,k}(R)$ for $k=0,1,2,\dots$.
These quantities decrease rapidly
with $k$ so to compute the largest ones requires little
work.
\subsection{Simulations for scalar waves.}
\label{sec:scalarsim}
For scalar waves the eigenvalues are
exactly
the $\Lambda_{3,k}(R)$.
For $R$ small, they are monotone in $k$
so the optimal focusing is for $k=0$.
Our first simulation (performed with the aid of
Matlab) computes approximately the integrals
defining $\Lambda_{3,k}(R)$ for $R\le 2\pi$
and $k=0,1,2,3$. The resulting graphs
are in the figure on the left. The horizontal axis
is $R$ and
on the vertical axis is plotted
the integral on the right hand side
of
\eqref{eq:Lambdadef}, that is,
$$
\frac
{\Lambda_{3,k}(R)}
{
(2\pi)^{3/2}|S^2|
}
\ =\
\frac
{\Lambda_{3,k}(R)}
{ 2^{7/2}\ \pi^{5/2} }
\,,
\qquad
k=0,1,2,3
\,.
$$
\begin{figure}
\begin{center}
\includegraphics[height=45mm]{FourLambdas.eps}
\hskip.3cm
\includegraphics[height=45mm]{Lambdacrossingzoom.eps}
\end{center}
\end{figure}
The four curves correspond to the four values of $k$.
The graph with the leftmost
hump is $\Lambda_{3,0}(R)$. The graph with the hump
second from the left is $\Lambda_{3,1}(R)$ and so on.
The conclusion is that the graph of $\Lambda_{3,0}(R)$ crosses
transversally the graph of
$\Lambda_{3,1}(R)$ just to the right of $R=3$. At that
point, $\Lambda_{3,1}(R)$ becomes the largest.
On the right
is a zoom showing
that the crossing is suspiciously close to $R=\pi$.
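The crossing can be bracketed numerically without plotting. A sketch (ours; it uses the elementary half integer Bessel functions available for $d=3$ and drops the common positive constant in \eqref{eq:Lambdadef}):

```python
import math

def h(k, r):
    # r * J_{(2k+1)/2}(r)**2 = (2/pi) * h(k, r)**2, elementary for d = 3
    if r == 0.0:
        return 0.0
    return (math.sin(r),
            math.sin(r) / r - math.cos(r),
            (3.0 / r**2 - 1.0) * math.sin(r) - 3.0 * math.cos(r) / r)[k]

def lam(k, R, n=2000):
    # Simpson's rule; Lambda_{3,k}(R) up to its common positive constant
    step = R / n
    s = h(k, 0.0)**2 + h(k, R)**2
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(k, i * step)**2
    return s * step / 3.0

def crossing(lo=3.0, hi=3.3, iters=40):
    # bisect lam(0, R) - lam(1, R); a sign change on [lo, hi] is assumed
    g = lambda R: lam(0, R) - lam(1, R)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

The computed crossing lands at $R=\pi$ to numerical precision. Indeed a short calculation (ours) gives, up to the common constant, $\Lambda_{3,0}(R)-\Lambda_{3,1}(R) = \sin R\,(\sin R/R - \cos R)$, whose first positive zero is exactly $R=\pi$.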
The graphs of the $\Lambda_{3,k}(R)$ are a little misleading since
it is not the total energy but the energy density that
is of interest. The next figure plots, as a function of
$R$, the quantity $\Lambda_{3,0}(R)/(2^{7/2}\pi^{5/2}|B_R(0)|)$.
The small gap near $R=0$ is because the division by
$|B_R(0)|$ is a sensitive operation and leads to numerical
errors in that range.
\begin{figure}
\begin{center}
\includegraphics[height=45mm]{density.eps}
\end{center}
\end{figure}
The energy density is greatest for balls with radius
close to $R=0$. The density drops to half its maximum
value at about $R=2$ which is about 1/3 of the wavelength.
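Since $J_{1/2}(r)^2 = (2/(\pi r))\sin^2 r$, the quantity $\Lambda_{3,0}(R)$ is proportional to $\int_0^R \sin^2 r\, dr = R/2 - \sin(2R)/4$, so the plotted density can be reproduced in closed form (a sketch of ours; the constant $2^{7/2}\pi^{5/2}$ cancels after normalizing by the $R\to 0$ limit):

```python
import math

def lam0(R):
    """Closed form of the k = 0 integral: int_0^R sin(r)^2 dr."""
    return R / 2.0 - math.sin(2.0 * R) / 4.0

def relative_density(R):
    """lam0(R) / R^3 normalized by its R -> 0 limit, which is 1/3."""
    return 3.0 * lam0(R) / R**3
```

The normalized density falls through one half between $R=1.7$ and $R=2.1$, matching the statement that the drop to half occurs at about $R=2$.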
\subsection{Simulations for electromagnetic waves.}
Using Theorem \ref{thm:Maxlargest}
one can investigate the analogous questions
for Maxwell's equations by manipulations of the
$\Lambda_{3,k}(R)$.
The simulations of the preceding subsection show that
for $R\le 3$ one has $\Lambda_{3,0}(R)>\Lambda_{3,1}(R)$.
To show that the eigenvalue corresponding to $\ell(\xi)$
is the optimum it suffices to verify
\eqref{eq:evcond}.
To do so one needs to verify the positivity of
$$
2^{-7/2}\pi^{-5/2}
\bigg(\frac{2}{3}\,
\frac{
\Lambda_{3,0}(R)
}
{
|B_R(0) |
}
\ -\
\frac{2}{3}\,
\frac{
\Lambda_{3,2}(R)
}
{
|B_R(0) |
}
\ -\
\frac{
\Lambda_{3,1}(R)
}
{
|B_R(0) |
}
\bigg)
\,.
$$
This is a linear combination of quantities
computed in the
preceding subsection. Its graph is plotted on the left. The graph
crosses from positive to negative
near $R=2.5$. The criterion is satisfied for all $R$
to the left of this crossing.
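The sign change can be checked numerically (a sketch of ours, again using the elementary half integer Bessel functions for $d=3$; the common positive constant is dropped since it does not affect the sign):

```python
import math

def h(k, r):
    # r * J_{(2k+1)/2}(r)**2 = (2/pi) * h(k, r)**2, elementary for d = 3
    if r == 0.0:
        return 0.0
    return (math.sin(r),
            math.sin(r) / r - math.cos(r),
            (3.0 / r**2 - 1.0) * math.sin(r) - 3.0 * math.cos(r) / r)[k]

def lam(k, R, n=2000):
    # Simpson's rule; Lambda_{3,k}(R) up to its common positive constant
    step = R / n
    s = h(k, 0.0)**2 + h(k, R)**2
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(k, i * step)**2
    return s * step / 3.0

def discriminant(R):
    # (2/3)(Lambda_{3,0} - Lambda_{3,2}) - Lambda_{3,1}, constant dropped
    return (2.0 / 3.0) * (lam(0, R) - lam(2, R)) - lam(1, R)
```

The discriminant is positive at $R=2$ and negative at $R=2.8$, consistent with the crossing near $R=2.5$ visible in the figure.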
\begin{figure}
\begin{center}
\includegraphics[height=45mm]{discriminant.eps}
\hskip.3cm
\includegraphics[height=45mm]{discriminantzoom.eps}
\end{center}
\end{figure}
For $R<2.5$ the energy density for the
optimizing monochromatic
electromagnetic fields associated with $\ell(\xi)$
is equal to
$$
\frac{2}{3}\,
\frac{
\Lambda_{3,0}(R)
}
{
|B_R(0) |
}
\ -\
\frac{2}{3}\,
\frac{
\Lambda_{3,2}(R)
}
{
|B_R(0) |
}
$$
Because of the factor $2/3$ it is smaller than the density
in the scalar case by that factor. The subtraction in the formula
shows that the density drops off more rapidly in the electromagnetic
case than in the scalar case.
The graph of $2^{-7/2}\pi^{-5/2}$ times this quantity is
plotted next.
\begin{center}
\includegraphics[height=45mm]{maxwelldensity.eps}
\end{center}
As in the scalar case the maximal energy density occurs
on balls with radius near zero. The intensity drops to one
half of this value to the left of $R=1.9$.
This is close to the
corresponding value for scalar waves,
about one third of a wavelength.
\vskip.2cm
\begin{document}
\begin{frontmatter}
\title{Zastavnyi Operators and Positive Definite Radial Functions}
\author[aff63c5b8f064b9863807f549819f278608]{Tarik Faouzi}
\ead{[email protected]}
\author[aff980d40830c7f8bb81f02e241db520037]{Emilio Porcu \corref{contrib-49136fc397578fcb577fd032b6140a3b}}
\ead{[email protected]}\cortext[contrib-49136fc397578fcb577fd032b6140a3b]{Corresponding author.}
\author[aff974d3bf2cad98dbdf933dc09b59dbb8d]{Moreno Bevilacqua}
\ead{[email protected]}
\author[aff2d0cc33cd806ae5c9cbf7f7b3f92f9ad]{Igor Kondrashuk}
\ead{[email protected]}
\address[aff63c5b8f064b9863807f549819f278608]{ Department of Statistics\unskip, Applied Mathematics Research Group
\unskip, University of Bio Bio\unskip, Chile. \\}
\address[aff980d40830c7f8bb81f02e241db520037]{School of Mathematics and Statistics\unskip, University of Newcastle\unskip, UK\unskip. Department of Mathematics and University of Atacama\unskip, Copiap\'o\unskip, and Millennium Nucleus Center
for the Discovery of Structures
in Complex Data\unskip, Chile. \\
}
\address[aff974d3bf2cad98dbdf933dc09b59dbb8d]{University of Valparaiso\unskip,
Department of Statistics\unskip. Millennium Nucleus Center
for the Discovery of Structures
in Complex Data\unskip, Chile. \\}
\address[aff2d0cc33cd806ae5c9cbf7f7b3f92f9ad]{Grupo de Matem\'atica Aplicada\unskip, Departamento de Ciencias B\'asicas\unskip, Universidad del B \'io-B \'io\unskip, Campus Fernando
May, Av. Andres Bello 720, Casilla 447\unskip, Chill\'an\unskip, Chile \\
}
\begin{abstract}
We consider a new operator acting on rescaled weighted differences between two members of the class $\Phi_d$ of positive definite radial functions.
In particular, we study the positive definiteness of the operator for the Mat{\'e}rn, Generalized Cauchy
and Wendland families.
\end{abstract}
\begin{keyword}
Completely Monotonic\sep Fourier Transforms\sep Positive Definite\sep Radial Functions
\end{keyword}
\end{frontmatter}
\section{Introduction}
Positive definite functions are fundamental to many branches of mathematics as well as probability theory, statistics and machine learning amongst others.
There has been an increasing interest in positive definite functions in $d$-dimensional Euclidean spaces ($d$ is a positive integer throughout), and the reader is referred to \cite{Dale14}, \cite{Sc38}, \cite{Schaback:2011}, \cite{wu95} and \cite{Wendland:1995}.
This paper is concerned with the class $\Phi_d$ of continuous functions $\phi:[0, \infty) \mapsto \R$ such that $\phi(0)=1$ and the function $\bx \mapsto \phi(\| \bx \|)$ is positive definite in $\R^d$. The class $\Phi_d$ is nested, with the strict inclusion relation:
$$ \Phi_1 \supset \Phi_2 \supset \cdots \supset \Phi_d \supset \cdots \supset \Phi_{\infty}:= \bigcap_{d \ge 1} \Phi_d. $$
The classes $\Phi_d$ are convex cones that are closed under product, non-negative linear combinations, and pointwise convergence. Further, for a given member $\phi$ in $\Phi_d$, the rescaled function $\phi(\cdot/\alpha)$ is still in $\Phi_d$ for any given $\alpha$. We make explicit emphasis on this fact because it will be repeatedly used subsequently.
For any nonempty set $A\subseteq\R^d$, we call $C(A)$ the set of continuous functions from $A$ into $\R$.
For $p$ a positive integer, let $\boldsymbol{\theta} \in \Theta \subset \R^p$ and let $\{ \phi(\cdot; \boldsymbol{\theta}), \; \boldsymbol{\theta} \in \Theta \}$ be a parametric family belonging to the class $\Phi_d$. For $\varepsilon \in \R$, $\varepsilon \neq 0$ and $0<\beta_1<\beta_2$ with $\beta_i$, $i=1,2$ two scaling parameters, we define the Zastavnyi operator
$K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi]: \Phi_d \mapsto C(\R)$ by
\def\btheta{\boldsymbol{\theta}}
\begin{equation}
\label{zastavnyi1}
K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](t) = \frac{\beta_2^{\varepsilon} \phi \left ( \frac{t}{\beta_2};\btheta\right )-\beta_1^{\varepsilon} \phi \left ( \frac{t}{\beta_1};\btheta\right ) }{\beta_2^{\varepsilon}-\beta_1^{\varepsilon}}, \qquad t \ge 0,
\end{equation}
with $K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](0)=1$. Here, by $\beta_i^{\varepsilon}$ we mean $\beta_i$ raised to the power of $\varepsilon$. It can indeed be checked that
$$K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](0) = \frac{\beta_2^{\varepsilon} \phi \left ( \frac{0}{\beta_2};\btheta\right )-\beta_1^{\varepsilon} \phi \left ( \frac{0}{\beta_1};\btheta\right ) }{\beta_2^{\varepsilon}-\beta_1^{\varepsilon}}=\frac{\beta_2^{\varepsilon} -\beta_1^{\varepsilon} }{\beta_2^{\varepsilon}-\beta_1^{\varepsilon}}=1.$$
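The operator (\ref{zastavnyi1}) is immediate to evaluate numerically. A minimal sketch (function and parameter names are ours; for $\phi$ we take the Mat{\'e}rn member with $\nu=1/2$, for which ${\cal M}(t;1/2)=e^{-t}$):

```python
import math

def zastavnyi(phi, eps, b1, b2):
    """Return t -> K_{eps; theta; b2, b1}[phi](t) as in the display above;
    requires 0 < b1 < b2 and eps != 0."""
    assert 0.0 < b1 < b2 and eps != 0.0
    c1, c2 = b1**eps, b2**eps
    return lambda t: (c2 * phi(t / b2) - c1 * phi(t / b1)) / (c2 - c1)

# phi = Matern member with nu = 1/2, i.e. M(t; 1/2) = exp(-t)
K = zastavnyi(lambda t: math.exp(-t), eps=1.0, b1=0.5, b2=2.0)
```

By construction $K(0)=1$, in agreement with the normalization just verified.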
A motivation for studying positive definiteness of the radial functions $\R^d \ni \mathbf{x} \mapsto K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](\|\mathbf{x}\|)$
comes from the problem of monotonicity of the so--called microergodic parameter
of specific parametric families \citep{Bevilacqua_et_al:2018,Bevilacqua:2018ab} when studying the asymptotic properties of the maximum likelihood estimation under fixed domain asymptotics.
The operator (\ref{zastavnyi1}) is a generalization of the operator proposed in \cite{Porcu:Zastavnyi:Xesbaiat} where $\varepsilon$ is assumed to be positive.
Our problem can be formulated as follows:
\begin{prob}\label{PP}
Let $d$ and $q$ be positive integers.
Let $\{ \phi(\cdot;\boldsymbol{\theta} ), \; \boldsymbol{\theta} \in \Theta \subset \R^q \}$ be a family of functions belonging to the class $\Phi_d$.
Find the conditions on $\varepsilon \in \R$, $\varepsilon \neq 0$ and $\boldsymbol{\theta}$, such that $K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi]$ as defined through (\ref{zastavnyi1}) belongs to the class $\Phi_n$
for some $n=1,2,\ldots$
for given $0<\beta_1<\beta_2$.
\end{prob}
We first note that Problem \ref{PP} has at least two possible solutions. Indeed, direct inspection shows
$$\lim_{\varepsilon \to +\infty} K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](t) = \phi \left ( \frac{t}{\beta_2};\btheta\right ), \quad
\lim_{\varepsilon \to -\infty} K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](t) = \phi \left ( \frac{t}{\beta_1};\btheta\right ), \qquad t\geq0,$$
where the convergence is pointwise in $t$.
The positive definiteness of (\ref{zastavnyi1}), assuming $\varepsilon>0$, has been studied in \cite{Porcu:Zastavnyi:Xesbaiat} when $\phi$ belongs to
the Buhmann class \citep{Buhmann:2001x}. An important special case of the Buhmann class is
the Generalized Wendland family
\citep{Gneiting:2002b}. For $\kappa>0$,
we define the class ${\cal GW}: [0,\infty) \to \R$ as:
\begin{equation}\label{eq:wendland}
{\cal GW}(t;\kappa,\mu)= \begin{cases} \frac{\int_{ t}^{1} u(u^2- t^2)^{\kappa-1} (1-u)^{\mu}\,\rm{d}u}{B(2\kappa,\mu+1)} ,& 0 \leq t < 1,\\ 0,& t \geq 1, \end{cases}
\end{equation}
and, for $\kappa=0$, by continuity we have
\begin{equation}\label{eq:wendland1}
{\cal GW}(t;0,\mu)= \begin{cases} (1-t)^{\mu} ,& 0 \leq t < 1,\\ 0,& t \geq 1. \end{cases}
\end{equation}
The function ${\cal GW}(t; \kappa,\mu)$ is a member of the class $\Phi_{d}$ if and only if $\mu\geq 0.5(d+1)+\kappa$ \citep{Zastavnyi:2002}.
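The integral in (\ref{eq:wendland}) is easy to evaluate numerically; note that at $t=0$ the numerator is exactly $B(2\kappa,\mu+1)$, so ${\cal GW}(0;\kappa,\mu)=1$. A sketch (parameter choices are ours; $\kappa\ge 1$ keeps the integrand bounded, while $0<\kappa<1$ would require a quadrature adapted to the endpoint singularity at $u=t$):

```python
import math

def gen_wendland(t, kappa, mu, n=4000):
    """Generalized Wendland GW(t; kappa, mu) via Simpson's rule on [t, 1].
    Assumes kappa >= 1 so the integrand is bounded (n must be even)."""
    if t >= 1.0:
        return 0.0
    # B(2 kappa, mu + 1) via the Gamma function
    beta = math.gamma(2 * kappa) * math.gamma(mu + 1) / math.gamma(2 * kappa + mu + 1)
    step = (1.0 - t) / n
    f = lambda u: u * (u * u - t * t)**(kappa - 1) * (1.0 - u)**mu
    s = f(t) + f(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(t + i * step)
    return s * step / 3.0 / beta
```

For instance $\texttt{gen\_wendland(0.0, 1, 4)}$ returns $1$ and the function vanishes identically for $t\ge 1$, reflecting the compact support.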
\cite{Porcu:Zastavnyi:Xesbaiat} found that if $\phi(\cdot; \btheta)= {\cal GW}(\cdot; {\kappa,\mu})$ and $\varepsilon>0$,
then
$K_{\varepsilon; \kappa,\mu;\beta_2,\beta_1}[{\cal GW}](t)$ is positive definite
if $\mu \geq (d + 7)/2 + \kappa$ and $\varepsilon\geq 2\kappa+1$.
This paper is especially concerned with the solution of Problem \ref{PP} for two celebrated parametric families:
\begin{description}
\item[The Mat{\'e}rn family.] In this case, $\phi(\cdot; \btheta)= {\cal M}(\cdot; \nu)$, so that $\boldsymbol{\theta}= \nu$ is a scalar and $\Theta=(0,\infty)$,
with
\begin{equation}
\label{matern} {\cal M}(t; \nu) = \frac{2^{1-\nu}}{\Gamma(\nu)} t^{\nu} {\cal K}_{\nu}(t), \qquad t \ge 0,
\end{equation}
where ${\cal K}_{\nu}$ is the modified Bessel function of the second kind of order $\nu>0$ \citep{Abra:Steg:70}. The functions ${\cal M}(\cdot; \nu)$, $\nu>0$, belong to the class $\Phi_{\infty}$ \citep{Stein:1999}.
\item[The Generalized Cauchy family.] In this case $\phi(\cdot; \btheta)= {\cal C}(\cdot; {\delta,\lambda})$, so that $\boldsymbol{\theta}=(\delta,\lambda)^{\top}$, with $\top$ denoting the transpose operator. Here, $\Theta=(0,2] \times (0,\infty)$, and
\begin{equation} \label{cauchy}
{\cal C}(t; {\delta,\lambda}) = \left ( 1+ t^{\delta} \right )^{-\lambda/\delta}, \qquad t \ge 0.
\end{equation}
The functions ${\cal C}(\cdot; \delta, \lambda)$ belong to the class $\Phi_{\infty}$ \citep{GneiSchla04}.
\end{description}
Additionally, we provide a solution to Problem \ref{PP} when $\phi(\cdot; \btheta)= {\cal GW}(\cdot; {\kappa,\mu})$, $\btheta=(\kappa,\mu)^{\top}$, $\Theta=[0,\infty) \times (0,\infty)$, assuming $\varepsilon <0$.
To give an idea of how the operator $K_{\varepsilon; \boldsymbol{\theta};\beta_2,\beta_1}[\phi](t)$ acts on $\phi(t;\boldsymbol{\theta})$ for given $0<\beta_1<\beta_2$
when $\phi$ is the Mat{\'e}rn family,
Figure \ref{fig:fcovt222} (A) compares
$K_{\varepsilon;0.5;\beta_2,\beta_1}[{\cal M}](t)$ with ${\cal M}(t/\beta_1;0.5)$ and
${\cal M}(t/\beta_2;0.5)$ when $\beta_1=0.075$, $\beta_2=0.15$, $\varepsilon=1$ (red line) and $\varepsilon=-2$ (blue line). We note that the behaviour at the origin of $K_{1;0.5;\beta_2,\beta_1}[{\cal M}](t)$ changes drastically with respect to
the behaviour at the origin of ${\cal M}(\cdot;0.5)$. Moreover, $K_{-2;0.5;\beta_2,\beta_1}[{\cal M}](t)$ can attain negative values. It turns out from Theorem \ref{theo11} that $K_{1;0.5;\beta_2,\beta_1}[{\cal M}](t) \in \Phi_{\infty}$ and $K_{-2;0.5;\beta_2,\beta_1}[{\cal M}](t) \in \Phi_2$.
A similar graphical representation is given in Figure \ref{fig:fcovt222} (B) when $\phi\equiv {\cal C}$, the Cauchy family in (\ref{cauchy}). In this case, we consider $\varepsilon =-0.7, 1.25$, $\delta=0.6$, $\lambda=2.5$, $\beta_1=0.2$, $\beta_2=0.3$. Note that, under this setting, $K_{-1.25;0.6,2.5;\beta_2,\beta_1}[{\cal C}](t)$ attains negative values as well.
It turns out from Theorem \ref{theo22} that $K_{\varepsilon;0.6,2.5;\beta_2,\beta_1}[{\cal C}](t)\in \Phi_{\infty}$ for $\varepsilon=-0.7$ and $1.25$.
\noindent Finally, Figure \ref{fig:fcovt222} (C) compares
$K_{\varepsilon;0,4.5;\beta_2,\beta_1}[{\cal GW}](t)$ with ${\cal GW}(t/\beta_1;0,4.5)$ and
${\cal GW}(t/\beta_2;0,4.5)$ with $\beta_2=0.6$, $\beta_1=0.4$ when $\varepsilon=1$ (red line) and $\varepsilon=-2$ (blue line). As for the Mat{\'e}rn case, the behaviour at the origin of $K_{\varepsilon;0,4.5;\beta_2,\beta_1}[{\cal GW}](t)$ changes markedly with respect to
the behaviour at the origin of ${\cal GW}(\cdot;0,4.5)$.
Moreover $K_{-2;0,4.5;\beta_2,\beta_1}[{\cal GW}](t)$ can reach negative values. It turns out from Theorem \ref{theo111} that $K_{-2;0,4.5;\beta_2,\beta_1}[{\cal GW}](t)$
belongs to
$\Phi_2$.
\begin{figure}[h!]
\begin{tabular}{ccc}
\includegraphics[width=5.2cm, height=6.5cm]{11.eps}& \includegraphics[width=5.2cm, height=6.5cm]{22.eps}& \includegraphics[width=5.2cm, height=6.5cm]{33.eps} \\
(A)&(B)&(C)\\
\end{tabular}
\caption{From left to right: (A) comparison of $K_{\varepsilon;\nu;\beta_2,\beta_1}[{\cal M}](t)$
when $\varepsilon=1$ (red line) and $\varepsilon=-2$ (blue line) with ${\cal M}(t/\beta_1;\nu)$ and
${\cal M}(t/\beta_2;\nu)$ when $\nu=0.5$, $\beta_1=0.075$, $\beta_2=0.15$.
(B) comparison of $K_{\varepsilon;\delta,\lambda;\beta_2,\beta_1}[{\cal C}](t)$
when $\varepsilon=0.7$ (red line) and $\varepsilon=-1.25$ (blue line) with ${\cal C}(t/\beta_1;\delta,\lambda)$ and
${\cal C}(t/\beta_2;\delta,\lambda)$ when $\delta=0.6$, $\lambda=2.5$, $\beta_1=0.2$, $\beta_2=0.3$.
(C) comparison of $K_{\varepsilon;\mu,\kappa;\beta_2,\beta_1}[{\cal GW}](t)$
when $\varepsilon=1$ (red line) and $\varepsilon=-2$ (blue line) with ${\cal GW}(t/\beta_1;\mu,\kappa)$ and
${\cal GW}(t/\beta_2;\mu,\kappa)$ when $\mu=4.5$, $\kappa=0$, $\beta_1=0.4$, $\beta_2=0.6$.
\label{fig:fcovt222}}
\end{figure}
These three examples show that operator (\ref{zastavnyi1}) can substantially change the features of a given family $\phi$,
in terms of both differentiability at the origin and negative correlations.
The solution of Problem \ref{PP} for the Mat{\'e}rn and Cauchy families necessarily passes through the specification of the properties of the radial Fourier transforms of the radially symmetric functions ${\cal M}(\| \cdot \|; \nu)$ and ${\cal C}(\|\cdot\|; \delta, \lambda)$ in $\R^d$. For the Mat{\'e}rn family, such a Fourier transform is available in closed form. For the Generalized Cauchy family we obtain the Fourier transform as a series expansion, generalizing a recent result obtained by \cite{Lim2010}. The plan of the paper is the following: Section 2 contains the necessary preliminaries and background. Section 3 gives the main results of this paper.
\section{Preliminaries}
We start with some expository material. A function $f: \R^d \to \R$ is called positive definite if, for any collection $\{ a_k \}_{k=1}^N \subset \R$ and points $\bx_1, \ldots, \bx_N \in \R^d$, the following holds:
$$ \sum_{k=1}^N \sum_{h=1}^N a_k f(\bx_k-\bx_h ) a_h \ge 0. $$
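To illustrate the definition, the following plain-Python sketch (our illustration, not part of the paper) builds the matrix $\left(f(\bx_k-\bx_h)\right)_{k,h}$ for the radial function $f(\bx)={\rm e}^{-\|\bx\|}$, i.e. the Mat{\'e}rn member ${\cal M}(\cdot;1/2)$ of Equation (\ref{matern}), on a handful of planar points, and confirms that the quadratic form is nonnegative by attempting a Cholesky factorization of the slightly regularized matrix.

```python
import math

def radial(t):
    # M(t; 1/2) = exp(-t), a member of Phi_infty
    return math.exp(-t)

def gram(points, f):
    # Matrix (f(|x_k - x_h|))_{k,h} for a radial function f
    n = len(points)
    return [[f(math.dist(points[k], points[h])) for h in range(n)]
            for k in range(n)]

def is_psd(a, jitter=1e-9):
    # Cholesky with a tiny diagonal jitter; success implies the
    # quadratic form in the definition is nonnegative (up to jitter).
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                s += jitter
                if s <= 0.0:
                    return False
                l[i][i] = math.sqrt(s)
            else:
                l[i][j] = s / l[j][j]
    return True

pts = [(0.0, 0.0), (1.0, 0.3), (2.5, -1.0), (0.4, 2.2), (-1.1, 0.9)]
assert is_psd(gram(pts, radial))
```

By contrast, a function such as $f(t)=1-t^2$ fails the test on well-separated points, since the corresponding quadratic form becomes negative.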
By Bochner's theorem, continuous positive definite functions are the Fourier transforms of positive and bounded measures, that is
\def\bomega{\boldsymbol{\omega}}
\begin{equation}
f(\bx) = \int_{\R^d } {\rm e}^{\mathsf{i} \langle \bx, \boldsymbol{z} \rangle} F ({\rm d} \boldsymbol{z}), \qquad \bx \in \R^d,
\end{equation} where $\langle \cdot,\cdot\rangle$ denotes inner product and where $\mathsf{i}$ is the complex number such that $\mathsf{i}^2=-1$.
Additionally, if $f(\bx)=\phi(\|\bx\|)$ for some continuous function defined on the positive real line, Schoenberg's theorem \citep[][with the references therein]{Dale14} shows that $f$ is positive definite if and only if its {\em radial part} $\phi$ can be written as
\begin{equation}
\label{schoenberg} \phi(t) = \int_{[0,\infty)} \Omega_d(\xi t) G_d({\rm d} \xi), \qquad t \ge 0,
\end{equation}
where $G_d$ is a positive and bounded measure, and where $$ \Omega_{d}(t) = t^{-(d-2)/2} J_{(d-2)/2}(t), \qquad t \ge 0, $$
where $J_{\nu}$ defines a Bessel function of order $\nu$. If $\phi(0)=1$, then $G_d$ is a probability measure \citep{Dale14}. Classical Fourier inversion in concert with Bochner's theorem shows that the function $\phi$ belongs to the class $\Phi_d$ if and only if it admits the representation (\ref{schoenberg}), and this in turn happens if and only if the function $\widehat{\phi}_d: [0,\infty) \to [0,\infty)$, defined through
\begin{equation} \label{FT}
\widehat{\phi}_d(z):= {\cal F}_d[\phi(t)](z)= \frac{z^{1-d/2}}{(2 \pi)^{d/2}} \int_{0}^{\infty} t^{d/2} J_{d/2-1}(tz) \phi(t) {\rm d} t, \qquad z \ge 0,
\end{equation}
is nonnegative and such that $\int_{[0,\infty)} \widehat{\phi}_d(z) z^{d-1} {\rm d} z < \infty$. Note that we intentionally put a subscript $d$ into $G_d$ and $\widehat{\phi}_d$ to emphasize the dependence on the dimension $d$ corresponding to the class $\Phi_d$ where $\phi$ is defined. This is explicitly stated in \cite{Dale14}, where it is explained that for any member of the class $\Phi_d$ there exist nonnegative bounded measures $G_1, \ldots, G_d$ in the representation (\ref{schoenberg}); hence the term $d$-Schoenberg measures proposed therein. Finally, a convergence argument as in Schoenberg \citep{Sc38} shows that $\phi \in \Phi_{\infty}$ if and only if
\begin{equation}\label{Boch}
\phi(\sqrt{t}) = \int_{[0,\infty)} {\rm e}^{-\xi t } G({\rm d} \xi), \qquad t \ge 0,
\end{equation}
for $G$ positive, nondecreasing and bounded. Thus, $\psi:=\phi(\sqrt{\cdot})$ is the Laplace transform of $G$, which shows, in concert with Bernstein's theorem, that the function $\psi$ is completely monotonic on the positive real line, that is, $\psi$ is infinitely differentiable and $(-1)^k \psi^{(k)}(t) \ge 0$, $t>0$, $k \in \mathbb{N}$. Here, $\psi^{(k)}$ denotes the $k$-th order derivative of $\psi$, with $\psi^{(0)}=\psi$ by abuse of notation.
For a given $d \in \mathbb{N}$, direct inspection shows that for any scale parameter $\beta>0$, the Fourier transform (\ref{FT}) of $\phi(\cdot/\beta)$ is identically equal to $\beta^{d} \widehat{\phi}_{d}( \beta \cdot)=:\widehat{\phi}_{d,\beta}(\cdot)$, and we shall repeatedly make use of this fact for the results following subsequently.
It is well known that the Fourier transform of the Mat\'ern covariance function ${\cal M}$ in Equation (\ref{matern}) is given by \citep{Stein:1999}
\begin{equation} \label{stein10}
\widehat{{\cal M}}_{d,\beta}(z;\nu)= \frac{\Gamma(\nu+d/2)}{\pi^{d/2} \Gamma(\nu)}
\frac{\beta^d}{(1+\beta^2z^2)^{\nu+d/2}}
, \qquad z \ge 0.
\end{equation}
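As a hedged numerical check (our illustration, under the standard convention $\widehat{f}(\bz)=(2\pi)^{-d}\int_{\R^d}{\rm e}^{-\mathsf{i}\langle \bx,\bz\rangle}f(\bx)\,{\rm d}\bx$, which for $d=1$ reduces to the cosine transform $\pi^{-1}\int_0^\infty \cos(tz)\phi(t)\,{\rm d}t$), one can verify (\ref{stein10}) in the case $d=1$, $\nu=1/2$, $\beta=1$: since ${\cal M}(t;1/2)={\rm e}^{-t}$, the formula predicts $\widehat{{\cal M}}_{1,1}(z;1/2)=\pi^{-1}(1+z^2)^{-1}$.

```python
import math

def cosine_transform(phi, z, T=60.0, n=240001):
    # (1/pi) * \int_0^T cos(z t) phi(t) dt, composite Simpson (n odd)
    h = T / (n - 1)
    total = 0.0
    for i in range(n):
        t = i * h
        w = 1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0)
        total += w * math.cos(z * t) * phi(t)
    return total * h / 3.0 / math.pi

matern_half = lambda t: math.exp(-t)          # M(t; 1/2)
for z in (0.0, 0.5, 2.0):
    lhs = cosine_transform(matern_half, z)
    rhs = 1.0 / (math.pi * (1.0 + z * z))     # Eq. (stein10) with d=1, nu=1/2
    assert abs(lhs - rhs) < 1e-6
```

The truncation at $T=60$ is harmless here since ${\rm e}^{-T}$ is far below the quadrature error.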
An ingenious approach in \cite{LiTe09} shows that the Fourier transform of Generalized Cauchy covariance function ${\cal C}$ in Equation (\ref{cauchy}) can be written as
\begin{equation} \label{stein1}
\widehat{{\cal C}}_{d,\beta}(z;\delta,\lambda) =-\frac{
\beta^{d/2+1}z^{-d}}{2^{d/2-1}\pi^{d/2+1}}\Im\left(\int_0^\infty\frac{\mathcal{K}_{(d-2)/2}(
\beta t)}{(1+e^{i\frac{\pi\delta}{2}}(t/z)^{\delta})^{\lambda/\delta}}t^{d/2} {\rm d}t\right), \qquad z > 0,
\end{equation} where $\Im$ denotes the imaginary part of a complex argument, ${\cal K}_{(d-2)/2}$ is the modified Bessel function of the second kind of order $(d-2)/2$ and $\beta$ is a scale parameter. A closed form expression for $\widehat{{\cal C}}_{d,\beta}$ has been elusive for a long time. \cite{Lim2010} shed some light on this problem by giving an infinite series representation of the spectral density
under some specific restrictions on the parameters.
The result below
shows that the series representation given in \cite{Lim2010} is valid without any restriction on the parameters.
\begin{thm}\label{the3}
Let ${\cal C}(\cdot;\delta,\lambda)$ be the Generalized Cauchy covariance function
as defined in Equation (\ref{cauchy}).
Then, it is true that
\begin{equation}\label{spectrGW_2}
\begin{aligned}
\widehat{\cal C}_{d,\beta}(z;\delta,\lambda)&=
\frac{z^{-d}}{\pi^{d/2}}\frac{1}{\Gamma(\frac{\lambda}{\delta})} \sum_{n=0}^\infty \frac{(-1)^n}{n!}
\frac{\Gamma(\frac{\lambda}{\delta} + n)\Gamma(d/2-(\frac{\lambda}{\delta}+n)\delta/2)}{\Gamma((\frac{\lambda}{\delta}+n)\delta/2)}\left(\frac{z\beta}{2}\right)^{(\frac{\lambda}{\delta}+n)\delta} \\
&+\frac{z^{-d}}{\pi^{d/2}\delta}\frac{2}{\Gamma(\frac{\lambda}{\delta})} \sum_{n=0}^\infty \frac{(-1)^n}{n!}
\frac{\Gamma\left(\displaystyle{\frac{2n+d}{\delta}} \right)\Gamma\left(\frac{\lambda}{\delta}- \displaystyle{\frac{2n+d}{\delta}}\right)}{\Gamma(n+d/2)}\left(\frac{z\beta}{2}\right)^{2n+d},
\end{aligned}
\end{equation}
with $z>0$, where $\delta \in (0,2)$ and $\lambda >0$.
\end{thm}
\begin{proof} The Mellin-Barnes representation is given by the identity
\begin{eqnarray}\label{asser1}
\frac{1}{(1+x)^{\alpha}} = \frac{1}{2\pi \mathsf{i}}\frac{1}{\Gamma(\alpha)}
\oint_{\Lambda} ~x^u\Gamma(-u)\Gamma(\alpha+u) \text{d}u,
\end{eqnarray}
where $\Gamma(\cdot)$ denotes the Gamma function. This representation is valid for any $x > 0$. The contour $\Lambda$ is a vertical line in the complex $u$-plane, running from negative to positive imaginary infinity and separating the poles of $\Gamma(-u)$ from those of $\Gamma(\alpha+u)$;
it is closed to the left complex infinity in case $x>1$, and to the right complex infinity if $0<x<1.$ \\
We now proceed to compute $\widehat{{\cal C}}_{d,\beta}$ as follows:
\begin{equation*}
\begin{aligned}
\widehat{{\cal C}}_{d,\beta}(\norm{\boldsymbol{z}};\delta,\lambda) =&
\mathcal{F}_d[\mathcal{C}_{d,\beta}(t;\delta,\lambda)](\norm{\boldsymbol{z}})\\
=&\beta^d \mathcal{F}_d[\mathcal{C}_{d}(t;\delta,\lambda)](\beta \norm{\boldsymbol{z}}), \qquad \boldsymbol{z} \in \R^d. \\
\end{aligned}
\end{equation*}
Applying Equation (\ref{asser1}), we obtain
\begin{equation*}
\begin{aligned}
\widehat{{\cal C}}_{d,\beta}(\norm{\boldsymbol{z}};\delta,\lambda)=&\frac{\beta^d}{(2\pi)^d}\frac{1}{\Gamma(\frac{\lambda}{\delta})}\frac{1}{2\pi \mathsf{i}}\int_{\R^d} {\rm e}^{ \mathsf{i} \beta \langle\boldsymbol{z},\boldsymbol{x}\rangle}\oint_{\Lambda} \Gamma(-u) \Gamma\left(u+\frac{\lambda}{\delta}\right)\norm{\boldsymbol{x}}^{u\delta} \text{d}u~ \text{d} \boldsymbol{x} \\
=&\frac{\beta^d}{(2\pi)^d}\frac{1}{\Gamma(\frac{\lambda}{\delta})}\frac{1}{2\pi \mathsf{i}}\oint_{\Lambda} \Gamma(-u) \Gamma\left(u+\frac{\lambda}{\delta}\right)\int_{\R^d} {\rm e}^{\mathsf{i} \beta \langle\boldsymbol{z}, \boldsymbol{x}\rangle}\norm{\boldsymbol{x}}^{u\delta}\text{d} \boldsymbol{x}~ \text{d}u. \\
\end{aligned}
\end{equation*}
We now invoke the well known relationship \citep{allendes2013solution},
\begin{eqnarray*} \int_{\R^d}
{\rm e}^{\mathsf{i} \langle\boldsymbol{z},\boldsymbol{x}\rangle}\norm{\boldsymbol{x}}^{u\delta}\text{d} \boldsymbol{x}=\frac{2^{d+u\delta}\pi^{d/2}\Gamma(d/2+u\delta/2)}{\Gamma(-u\delta/2)\norm{\boldsymbol{z}}^{d
+ u\delta}},
\end{eqnarray*}
and, by abuse of notation, we now write $z:=\norm{\boldsymbol{z}}$. We have
\begin{eqnarray} \label{tonto-lava}
\widehat{\cal C}_{d,\beta}(z;\delta,\lambda)&=&\frac{\beta^d}{(2\pi)^d}\frac{1}{\Gamma(\frac{\lambda}{\delta})}\frac{1}{2\pi \mathsf{i} }\oint_{\Lambda} ~\Gamma(-u) \Gamma(u+\frac{\lambda}{\delta}) \frac{2^{d+u\delta}\pi^{d/2} \Gamma(d/2+u\delta/2)}{\Gamma(-u\delta/2)}\frac{1}{{(\beta z)}^{d + u\delta}} \text{d}u \nonumber \\
&=&\frac{z^{-d}}{\pi^{d/2}}\frac{1}{\Gamma(\frac{\lambda}{\delta})}\frac{1}{2\pi \mathsf{i} } \oint_{\Lambda} \frac{\Gamma(-u) \Gamma(u +\frac{\lambda}{\delta})\Gamma(d/2+u\delta/2)}{\Gamma(-u\delta/2)}\left(\frac{2}{z\beta}\right)^{u\delta} \text{d}u.
\end{eqnarray}
For any given value of $|2/(z\beta)|$, no matter whether it is smaller or greater than $1$, we may close the contour to the left complex infinity.
The resulting series converges for any value of $z$, because the Gamma function in the denominator suppresses those in the numerator in the coefficients of the powers of $z$.
We now observe that the functions $u \mapsto \Gamma(\frac{\lambda}{\delta} + u)$ and $u \mapsto \Gamma\left(d/2+\frac{u\delta}{2}\right)$ have poles in the complex plane, respectively when $\frac{\lambda}{\delta} + u = -n$ and when $d/2+\frac{u\delta}{2} = -n$, $n \in \mathbb{N}$. Summing the corresponding residues and through direct inspection, we obtain that the right hand side of (\ref{tonto-lava}) matches (\ref{spectrGW_2}). The proof is completed.
\end{proof}
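As a numerical illustration of Theorem \ref{the3} (a sketch of ours; the truncation lengths and the test parameters $d=1$, $\delta=1$, $\lambda=3/2$, $\beta=1$ are our choices, picked so that no Gamma factor in (\ref{spectrGW_2}) is evaluated at a non-positive integer), one can sum the first few terms of the series and compare with a direct quadrature of the one-dimensional cosine transform $\pi^{-1}\int_0^\infty \cos(zt)(1+t)^{-3/2}\,{\rm d}t$.

```python
import math

def cauchy_hat_series(z, d=1, delta=1.0, lam=1.5, beta=1.0, terms=20):
    # Truncation of Eq. (spectrGW_2)
    a = lam / delta
    g = math.gamma
    s1 = sum((-1) ** n / math.factorial(n)
             * g(a + n) * g(d / 2 - (a + n) * delta / 2)
             / g((a + n) * delta / 2) * (z * beta / 2) ** ((a + n) * delta)
             for n in range(terms))
    s2 = sum((-1) ** n / math.factorial(n)
             * g((2 * n + d) / delta) * g(a - (2 * n + d) / delta)
             / g(n + d / 2) * (z * beta / 2) ** (2 * n + d)
             for n in range(terms))
    pref = z ** (-d) / (math.pi ** (d / 2) * g(a))
    return pref * s1 + pref * 2.0 / delta * s2

def cosine_transform(phi, z, T=4000.0, n=200001):
    # (1/pi) * \int_0^T cos(z t) phi(t) dt via composite Simpson (n odd)
    h = T / (n - 1)
    total = 0.0
    for i in range(n):
        t = i * h
        w = 1.0 if i in (0, n - 1) else (4.0 if i % 2 else 2.0)
        total += w * math.cos(z * t) * phi(t)
    return total * h / 3.0 / math.pi

cauchy = lambda t: (1.0 + t) ** -1.5          # C(t; 1, 3/2)
for z in (0.5, 1.0):
    assert abs(cauchy_hat_series(z) - cosine_transform(cauchy, z)) < 1e-3
```

As a consistency check, the leading term of the second sum gives $\widehat{\cal C}_{1,1}(z;1,3/2)\to 2/\pi$ as $z\to 0$, which matches $\pi^{-1}\int_0^\infty(1+t)^{-3/2}\,{\rm d}t = 2/\pi$.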
\section{Main Results}
We start by providing a solution to Problem \ref{PP} when $\phi(\cdot; \btheta)= {\cal M}(\cdot; \nu)$, so that $\boldsymbol{\theta} \equiv \nu$ and $\Theta=(0,\infty)$.
\begin{thm}\label{theo11}
Let ${\cal M}(\cdot;\nu)$ be the Mat\'ern function as defined in Equation (\ref{matern}). Let $K_{\varepsilon;\nu;\beta_2,\beta_1}[{\cal M}]$ with $0<\beta_1<\beta_2$, be the Zastavnyi operator (\ref{zastavnyi1}) related to the function ${\cal M}(\cdot; \nu)$.
\begin{enumerate}
\item Let $\varepsilon>0$. Then, $K_{\varepsilon;\nu;\beta_2,\beta_1}[{\cal M}] \in \Phi_{\infty}$
if and only if $\varepsilon\geq 2\nu >0$;
\item For a given $d\in\mathbb{N}$, let $\varepsilon<0$. Then, $K_{\varepsilon;\nu;\beta_2,\beta_1}[{\cal M}] \in\Phi_{d} $
if and only if $\varepsilon\leq -d<0$.
\end{enumerate}
\end{thm}
The following result gives some conditions for the solution of Problem \ref{PP} when $\phi(\cdot; \btheta)= {\cal GW}(\cdot;\mu,\kappa)$, so that $\btheta=(\kappa,\mu)^{\top}$, $\Theta=[0,\infty) \times (0,\infty)$.
\begin{thm}\label{theo111}
Let $d$ be a positive integer. Let ${\cal GW}(\cdot;\mu,\kappa)$ be the function defined through Equations (\ref{eq:wendland}) and (\ref{eq:wendland1}), for $\kappa>0$ and $\kappa=0$ respectively. Let $K_{\varepsilon; \mu,\kappa,\beta_2,\beta_1}[{\cal GW}]$ with $0<\beta_1<\beta_2$, be the Zastavnyi operator (\ref{zastavnyi1})
related to the function ${\cal GW}(\cdot; \mu,\kappa)$. Then:
\begin{enumerate}
\item If $\varepsilon\geq 2\kappa+1>0$ and $\mu\geq (d+7)/2 + \kappa$, then $K_{\varepsilon; \mu,\kappa,\beta_2,\beta_1}[{\cal GW}] \in \Phi_{d}$.
\item $K_{\varepsilon; \mu,\kappa,\beta_2,\beta_1}[{\cal GW}] \in \Phi_{d}$ if and only if $\varepsilon= 2\kappa+1$ and $\mu\geq (d+7)/2 + \kappa$.
\item If $\varepsilon\leq -d<0$ and $\mu\geq (d+7)/2 + \kappa$, then $K_{\varepsilon; \mu,\kappa,\beta_2,\beta_1}[{\cal GW}] \in \Phi_{d} $.
\item $K_{-d; \mu,\kappa,\beta_2,\beta_1}[{\cal GW}] \in \Phi_{d}$ if and only if $\mu\geq (d+1)/2 + \kappa$.
\end{enumerate}
\end{thm}
We now assume that, for a fixed $d$, $\lambda>d$. This condition is necessary to ensure integrability of the Generalized Cauchy covariance function, and hence boundedness of the related spectral density. The following result provides a solution to Problem \ref{PP} when $\phi(\cdot; \btheta)= {\cal C}(\cdot; \delta,\lambda)$ for $0<\delta<2$, so that $\Theta=(0,2) \times (d,\infty)$.
\begin{thm}\label{theo22}
Let ${\cal C}(\cdot;\delta,\lambda)$ for $0<\delta<2$ be the Generalized Cauchy function as defined in Equation (\ref{cauchy}). Let $K_{\varepsilon;\delta,\lambda;\beta_2,\beta_1}[{\cal C}]$ with $0<\beta_1<\beta_2$, be the Zastavnyi operator (\ref{zastavnyi1}) related to the function ${\cal C}(\cdot; \delta,\lambda)$.
\begin{enumerate}
\item Let $\varepsilon>0$. If $\varepsilon\geq\delta >0$ and $\delta<1$, then $K_{\varepsilon;\delta,\lambda;\beta_2,\beta_1}[{\cal C}] \in \Phi_{\infty}$;
\item Let $\varepsilon<0$. $K_{\varepsilon;\delta,\lambda;\beta_2,\beta_1}[{\cal C}] \in \Phi_{\infty}$ if and only if $\varepsilon\leq -\lambda <0$.
\end{enumerate}
\end{thm}
Two technical lemmas are needed to prove our last result.
\begin{lemma}\label{lem10}
Let $\mathcal{K}_{\nu}$ be the modified Bessel function of the second kind of order $\nu$, as in Equation (\ref{matern}). Then,
\begin{enumerate}
\item $\underset{z\to +\infty}{\lim}z\frac{\mathcal{K}_{\nu}^{\prime}(z)}{\mathcal{K}_{\nu}(z)}=-\infty$, for all $\nu\in(-\infty,+\infty)$;
\item $\underset{z\to 0^+}{\lim}z\frac{\mathcal{K}_{\nu}^{\prime}(z)}{\mathcal{K}_{\nu}(z)}=-\nu$, \qquad for $\nu>1$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove the two assertions, it is enough to use the following result \citep{Baricz11},
\begin{equation}\label{lem100}
-\sqrt{\frac{\nu}{\nu-1}z^2+\nu^2}<\frac{z\mathcal{K}_{\nu}^{\prime}(z)}{\mathcal{K}_{\nu}(z)}<-\sqrt{z^2+\nu^2},
\end{equation}
where the left hand side of (\ref{lem100}) is true for all $\nu>1$, and the right hand side holds for all $\nu\in \R$.
\end{proof}
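For half-integer orders the bound (\ref{lem100}) can be checked directly, since $\mathcal{K}_{\nu}$ is then elementary. The sketch below (our illustration; it uses the closed forms of $\mathcal{K}_{1/2},\mathcal{K}_{3/2},\mathcal{K}_{5/2}$ and the recurrence $\mathcal{K}_{\nu}^{\prime}=-(\mathcal{K}_{\nu-1}+\mathcal{K}_{\nu+1})/2$) verifies both the inequality and the limit in point 2 of Lemma \ref{lem10} for $\nu=3/2$.

```python
import math

def k_half(order, z):
    # Closed forms of K_{1/2}, K_{3/2}, K_{5/2}
    c = math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)
    if order == 0.5:
        return c
    if order == 1.5:
        return c * (1.0 + 1.0 / z)
    if order == 2.5:
        return c * (1.0 + 3.0 / z + 3.0 / z ** 2)
    raise ValueError("order not implemented")

def ratio(z, nu=1.5):
    # z K_nu'(z) / K_nu(z) via K_nu' = -(K_{nu-1} + K_{nu+1}) / 2
    kprime = -(k_half(nu - 1.0, z) + k_half(nu + 1.0, z)) / 2.0
    return z * kprime / k_half(nu, z)

nu = 1.5
for z in (0.2, 1.0, 5.0):
    r = ratio(z, nu)
    lower = -math.sqrt(nu / (nu - 1.0) * z * z + nu * nu)
    upper = -math.sqrt(z * z + nu * nu)
    assert lower < r < upper             # inequality (lem100)
assert abs(ratio(1e-4, nu) + nu) < 1e-3  # limit -nu as z -> 0^+
```

At $z=1$ the ratio is exactly $-2$, squeezed between $-\sqrt{21}/2$ and $-\sqrt{13}/2$, consistent with (\ref{lem100}).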
\begin{lemma}\label{lem11}
Let ${\cal C}(\cdot;2,\lambda)$ be the Cauchy correlation function as defined in (\ref{cauchy}). Let $\beta>0$. Then, for $d>\lambda/2+2$ and $2\varepsilon<-\lambda$ the following assertions are equivalent:
\begin{enumerate}
\item $\beta^{\varepsilon}\widehat{{\cal C}}_{d,\beta}(z;2,\lambda)$ is decreasing with respect to $\beta$ on $[0,+\infty)$, for every $z,\lambda$;
\item $\beta^{\varepsilon+\frac{2d+\lambda}{4}}\mathcal{K}_{\frac{2d-\lambda}{4}}(\beta)$ is decreasing with respect to $\beta$ on $[0,+\infty)$, for every $z,\lambda$;
\item $ (\varepsilon+\frac{2d+\lambda}{4})+\beta\frac{\mathcal{K}_{\frac{2d-\lambda}{4}}^{\prime}(\beta)}{\mathcal{K}_{\frac{2d-\lambda}{4}}(\beta)}<0$, $\beta\in [0,\infty)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Showing that $\beta^{\varepsilon}\widehat{{\cal C}}_{d,\beta}(z;2,\lambda)$ is decreasing with respect to $\beta$ is the same as showing that
$\beta^{\varepsilon+\lambda/4+d/2}\mathcal{K}_{d/2-\lambda/4}( \beta)$ is decreasing. Point {\it 2} of Lemma~\ref{lem11} holds if and only if
$$ \beta^{\varepsilon+\frac{2d+\lambda}{4}-1}\left(\left (\varepsilon+\frac{2d+\lambda}{4} \right )+\beta\frac{\mathcal{K}_{\frac{2d-\lambda}{4}}^{\prime}(\beta)}{\mathcal{K}_{\frac{2d-\lambda}{4}}(\beta)}\right)<0.$$ Applying point {\it 2} of Lemma~\ref{lem10} and the fact that $\beta\mathcal{K}_{\frac{2d-\lambda}{4}}^{\prime}(\beta)/\mathcal{K}_{\frac{2d-\lambda}{4}}(\beta)$ is decreasing with respect to $\beta$, the three assertions of Lemma~\ref{lem11} are true if $2d>\lambda+4$ and $2\varepsilon<-\lambda$. The proof is completed.
\end{proof}
We are now able to give a solution to Problem \ref{PP} when $\phi(\cdot; \boldsymbol{\theta})= {\cal C}(\cdot; 2,\lambda)$, so that $\boldsymbol{\theta}=\lambda$ and $\Theta=(0,\infty)$.
\begin{thm}\label{theo33}
Let ${\cal C}(\cdot;2,\lambda)$ be the Cauchy function as defined in Equation (\ref{cauchy})
and let $K_{\varepsilon;2,\lambda;\beta_2,\beta_1}[{\cal C}]$ with $0<\beta_1<\beta_2$ be the Zastavnyi operator (\ref{zastavnyi1}) related to the function ${\cal C}(\cdot; 2,\lambda)$.
Then, for $\lambda<2d-4$, $K_{\varepsilon;2,\lambda;\beta_2,\beta_1}[{\cal C}] \in \Phi_{d} $ provided $2\varepsilon < -\lambda$.
\end{thm}
\begin{proof}
We need to find conditions such that $K_{\varepsilon;2,\lambda;\beta_2,\beta_1}[{\cal C}] \in \Phi_{d}$.
This is equivalent to the following condition:
$$\beta_1^{\varepsilon}\widehat{{\cal C}}_{d,\beta_1}(z;2,\lambda)- \beta_2^{\varepsilon}\widehat{{\cal C}}_{d,\beta_2}(z;2,\lambda)\geq 0, \qquad z>0.$$
Since $\beta_1<\beta_2$, it suffices to prove that the function $\beta^{\varepsilon}\widehat{{\cal C}}_{d,\beta}(z;2,\lambda)$ is decreasing with respect to $\beta$. This follows from Lemma \ref{lem11}, so that $K_{\varepsilon;2,\lambda;\beta_2,\beta_1}[{\cal C}] \in \Phi_{d}.$
\end{proof}
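For half-integer order, the monotonicity used in the proof can also be checked numerically. With $d=4$, $\lambda=2$ (so $\lambda<2d-4$) and $\varepsilon=-3/2$ (so $2\varepsilon<-\lambda$), point 2 of Lemma \ref{lem11} asks that $\beta^{\varepsilon+(2d+\lambda)/4}\mathcal{K}_{(2d-\lambda)/4}(\beta)=\beta\,\mathcal{K}_{3/2}(\beta)$ be decreasing in $\beta$; the sketch below (our illustration, using the closed form of $\mathcal{K}_{3/2}$) confirms this on a grid.

```python
import math

def k32(z):
    # K_{3/2}(z) in closed form
    return math.sqrt(math.pi / (2.0 * z)) * math.exp(-z) * (1.0 + 1.0 / z)

# d = 4, lam = 2, eps = -1.5:
# exponent eps + (2d + lam)/4 = 1, order (2d - lam)/4 = 3/2
g = lambda b: b * k32(b)

grid = [0.05 * i for i in range(1, 201)]               # beta in (0, 10]
values = [g(b) for b in grid]
assert all(x > y for x, y in zip(values, values[1:]))  # strictly decreasing
```

Indeed $g(\beta)=\sqrt{\pi/2}\,{\rm e}^{-\beta}(\sqrt{\beta}+1/\sqrt{\beta})$ has derivative $-\sqrt{\pi/2}\,{\rm e}^{-\beta}\left(\sqrt{\beta}+\tfrac{1}{2\sqrt{\beta}}+\tfrac{1}{2\beta^{3/2}}\right)<0$ for all $\beta>0$.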
\section*{Acknowledgements}
The authors dedicate this work to Viktor Zastavnyi on the occasion of his sixtieth birthday. \\
Partial support was provided by the Millennium Science Initiative of the Ministry of Economy, Development, and Tourism, grant ``Millennium Nucleus Center for the Discovery of Structures in Complex Data'', for Moreno Bevilacqua and Emilio Porcu,
by FONDECYT grant 1160280, Chile, for Moreno Bevilacqua,
by FONDECYT grant 1130647, Chile, for Emilio Porcu,
and by grant DIUBB 170308 3/I from the University of B{\'\i}o-B{\'\i}o for Tarik Faouzi. Tarik Faouzi and Igor Kondrashuk thank the support of project DIUBB 172409 GI/C at the University of B{\'\i}o-B{\'\i}o. The work of I.K. was supported
in part by FONDECYT (Chile) grant No. $1121030$ and by DIUBB (Chile) grant No. $181409~3/R$.
\bibliographystyle{apalike}
\bibliography{mybib12m}
\end{document} | {"config": "arxiv", "file": "1811.09266/Arxiv-Zastavnyi-operator-09-07-2019.tex"} |
\section{Infimum of Power Set}
Tags: Power Set, Empty Set, Order Theory
\begin{theorem}
Let $S$ be a [[Definition:Set|set]].
Let $\mathcal P \left({S}\right)$ be the [[Definition:Power Set|power set]] of $S$.
Let $\left({\mathcal P \left({S}\right), \subseteq}\right)$ be the [[Definition:Relational Structure|relational structure]] defined on $\mathcal P \left({S}\right)$ by the [[Definition:Relation|relation]] $\subseteq$.
(From [[Subset Relation on Power Set is Partial Ordering]], this is an [[Definition:Ordered Set|ordered set]].)
Then the [[Definition:Infimum of Set|infimum]] of $\left({\mathcal P \left({S}\right), \subseteq}\right)$ is the [[Definition:Empty Set|empty set]] $\varnothing$.
\end{theorem}
\begin{proof}
Follows directly from [[Empty Set is Subset of All Sets]] and the definition of [[Definition:Infimum of Set|infimum]].
{{qed}}
[[Category:Power Set]]
[[Category:Empty Set]]
[[Category:Order Theory]]
\end{proof}
| {"config": "wiki", "file": "thm_2233.txt"} |
TITLE: Minimal polynomial with $f(\alpha)=0$
QUESTION [4 upvotes]: I'm learning this stuff for the first time so please bear with me.
I'm given $\alpha=\sqrt{3}+\sqrt{2}i$ and asked to find the minimal polynomial in $\mathbb{Q}[x]$ which has $\alpha$ as a root.
I'm ok with this part, quite sure I can find such a polynomial like so:
$\alpha = \sqrt{3}+\sqrt{2}i$
$\iff \alpha - \sqrt{3} = \sqrt{2}i$
squaring both sides,
$\iff \alpha^2 -2\sqrt{3}\alpha + 3 = -2$
$\iff \alpha^2+5 = 2\sqrt{3}\alpha$
again squaring both sides,
$\iff \alpha^4+10\alpha^2+25 = 12\alpha^2$
$\iff \alpha^4-2\alpha^2+25 = 0$
Therefore $f(x)=x^4-2x^2+25$ should have $f(\alpha)=0$.
My question is how can I be sure this polynomial is minimal, i.e., can I be sure there isn't some lower degree polynomial out there with $\alpha$ as a root?
REPLY [4 votes]: It suffices to prove that $1,\alpha,\alpha^2,\alpha^3$ are linearly independent over $\mathbb Q$. Writing them in the basis $1,\sqrt{3},\sqrt{2}i,\sqrt{6}i$ of $\mathbb Q[\sqrt{3},\sqrt{2}i]$ we get:
$$
\begin{pmatrix}
1 \\ \alpha \\ \alpha^2 \\ \alpha^3
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 2 \\ 0 & -3 & 7 & 0
\end{pmatrix}
\begin{pmatrix}
1 \\ \sqrt{3} \\ \sqrt{2}i \\ \sqrt{6}i
\end{pmatrix}
$$
The key point is that the matrix is invertible.
You can prove this by row reduction or by computing its determinant and checking that it is not zero. | {"set_name": "stack_exchange", "score": 4, "question_id": 3480242} |
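For completeness, here is a quick numerical sanity check of both facts (plain Python, not part of the original argument): evaluate $f(\alpha)$ with complex arithmetic, and expand the determinant of the matrix above.

```python
import math

alpha = math.sqrt(3) + math.sqrt(2) * 1j
f_alpha = alpha ** 4 - 2 * alpha ** 2 + 25
assert abs(f_alpha) < 1e-9              # alpha is a root of x^4 - 2x^2 + 25

def det(m):
    # Laplace expansion along the first row (fine for a 4x4 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

m = [[1, 0, 0, 0], [0, 1, 1, 0], [1, 0, 0, 2], [0, -3, 7, 0]]
assert det(m) == -20                    # nonzero, so the matrix is invertible
```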
\section{Numerical Results}\label{sect:Num}
In this section we provide numerical results that support the assumptions made and the results deduced from these assumptions. Moreover we describe the method used to obtain these results.
All computations are done using the following six new Hecke eigenforms $f_N \in S_2(\Gamma_0(N))$:
\small
\begin{align*}
f_{29} & = q+(-1+\sqrt 2)q^2 +(1-\sqrt 2) q^3 +(1-2\sqrt 2) q^4-q^5 + \cdots\\
f_{43} & = q+\sqrt 2 q^2 - \sqrt 2 q^3 +(2-\sqrt 2 ) q^5 + \cdots\\
f_{55} & = q+(1+\sqrt 2)q^2 -2\sqrt 2q^3 +(1+2\sqrt 2 )q^4 -q^5 + \cdots\\
f_{23} & = q+\frac{-1+\sqrt 5}{2} q^2- \sqrt 5 q^3 - \frac{1+\sqrt 5} 2 q^4 + (-1+\sqrt 5)q^5 + \cdots \\
f_{87} & = q + \frac{1+\sqrt 5} 2 q^2 + q^3 + \frac{-1+\sqrt 5} 2 q^4 + (1-\sqrt 5)q^5 + \cdots \\
f_{167} & = q+ \frac{-1+\sqrt 5} 2 q^2 -\frac{1+\sqrt 5} 2 q ^3 -\frac{1+\sqrt 5} 2 q ^4 - q^5 + \cdots.
\end{align*}
\normalsize
Note that the level for each of the eigenforms is square-free and the nebentypus is trivial so by Lemma \ref{lemma:prel0} none of these eigenforms have inner twists. Moreover the coefficient field of $f_{29}$, $f_{43}$ and $f_{55}$ is $
\Q(\sqrt 2)$ and the coefficient field of $f_{23}$, $f_{87}$ and $f_{167}$ is $\Q(\sqrt 5 )$. In this section we will denote $\A_{f_N}$, $c_{f_N}$,... by $\A_N$, $c_N$, ... respectively.
As described in Section \ref{section:suth} the Galois orbit of the $p$-th coefficient of an eigenform $f_N$ can be computed from the $L_p$-polynomial of the abelian variety $\A_N$ associated to $f_N$. For each eigenform $f_N$ we give an equation for a hyperelliptic curve $C_N$ such that the Jacobian $J(C_N)$ is isomorphic to the abelian variety $\A_N$. Obtaining such an equation is a non-trivial problem. For levels $29$, $43$ and $55$ the equations are found in \cite[page~42]{Bend}, and in \cite[page 137]{Wils} for the remaining three. The equations are:
\begin{align*}
C_{29}& :y^2 =x^6 - 2x^5 + 7x^4 - 6x^3 + 13x^2 - 4x + 8, \\
C_{43}& :y^2 = -3x^6- 2x^5 + 7x^4 - 4x^3 - 13x^2 + 10x - 7, \\
C_{55}&:y^2 =-3x^6 + 4x^5 + 16x^4 - 2x^3 - 4x^2 + 4x - 3, \\
C_{23}& :y^2 = x^6 + 2x^5 - 23x^4 + 50x^3 - 58x^2 + 32x - 11, \\
C_{87}&:y^2 =-x^6 + 2x^4 + 6x^3 + 11x^2 + 6x + 3, \\
C_{167}&:y^2 =-x^6 + 2x^5 + 3x^4 - 14x^3 + 22x^2 - 16x + 7.
\end{align*}
Next we use Andrew Sutherland's {\tt smalljac} algorithm described in \cite{KeSu} to compute the coefficients of the $L_p$-polynomial of each hyperelliptic curve $C_N$. This algorithm is implemented in C and is available at Sutherland's web page.
With this method we are able to compute the Galois orbit of the coefficients of one eigenform for all primes up to $10^8$ in less than $50$ hours. All computations are done on a Dell Latitude E6540 laptop with Intel i7-4610M processor (3.0 GHz, 4MB cache, Dual Core). The processing of the data and the creation of the graphs was done using Sage Mathematics Software \cite{sage} on the same machine. The running time of this is negligible compared to the smalljac algorithm.
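For very small primes of good reduction, the trace of the Galois orbit can also be recovered by naive point counting, since $\mathrm{Tr}(a_p(f_N)) = p + 1 - \#C_N(\mathbb{F}_p)$. The sketch below (our illustration, not the {\tt smalljac} code; it assumes the leading coefficient of $f$ is nonzero mod $p$) counts points on $C_{23}$ and recovers the traces of $a_3$ and $a_5$ from the $q$-expansion of $f_{23}$.

```python
def count_points(coeffs, p):
    # #C(F_p) for y^2 = f(x) with deg f = 6; coeffs = [c_0, ..., c_6].
    # Affine points, plus 2 or 0 points at infinity depending on whether
    # the leading coefficient is a nonzero square mod p.
    def chi(a):                          # quadratic character via Euler
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    affine = sum(1 + chi(f(x)) for x in range(p))
    return affine + 1 + chi(coeffs[6])

# C_23: y^2 = x^6 + 2x^5 - 23x^4 + 50x^3 - 58x^2 + 32x - 11
c23 = [-11, 32, -58, 50, -23, 2, 1]
# Traces from the q-expansion of f_23:
# Tr(a_3) = Tr(-sqrt 5) = 0 and Tr(a_5) = Tr(-1 + sqrt 5) = -2
assert 3 + 1 - count_points(c23, 3) == 0
assert 5 + 1 - count_points(c23, 5) == -2
```

This brute-force count is of course only feasible for tiny $p$; {\tt smalljac} is what makes the range $p<10^8$ accessible.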
\subsection{Murty's Conjecture}
First we check Conjecture \ref{conj:Murty}. For each eigenform we plot the number of primes $p<x$ such that the $p$-th coefficient is a rational integer for 50 values of $x$ up to $10^8$. According to this conjecture there exists a constant $c_N$ such that
$$\#\{p<x \textnormal{ prime} \mid a_p(f_N) \in \Q\} \sim c_N \frac{\sqrt x}{\log x}.$$
To check the conjecture we approximate $c_N$ using least squares fitting. Denote this estimate by $\widetilde c_N$. Figure \ref{fig:model} provides numerical evidence for the behaviour of $N(x)$ and column 2 of Table \ref{table:FN} lists the values of $\widetilde c_N$ found by least squares fitting.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{LTGModelAll}
\caption{Plot of $\#\{p<x \textnormal{ prime} \mid a_p(f_N) \in \Q\}$ (dots) and $\widetilde c_N \sqrt x/\log x$ (line) using least squares fitting to compute $\widetilde c_N$ for $x$ up to $10^8$. }
\label{fig:model}
\end{figure}
\subsection{The place at infinity}
Corollary \ref{cor:PmCor} states that under Assumption \ref{ass:PmAss} and the generalized Sato-Tate conjecture $$\#\left\{p<x \textnormal{ prime} \mid Z_p \in ]-m\sqrt D/2, m\sqrt D/2[\right\} \sim \frac{16 \sqrt D m}{3\pi^2} \frac {\pi(x)} {\sqrt x}.$$
For $m$ equal to $100$, $500$ and $1000$ Figure \ref{fig:Inf} indicates that $\frac{16m \sqrt D }{3\pi^2} \frac {\pi(x)} {\sqrt x}$ is in fact a good approximation for $\#\{p<x\textnormal{ prime}\mid Z_p \in ]-m\sqrt D/2, m\sqrt D/2[\}$. Although this neither proves Assumption \ref{ass:PmAss} nor the generalized Sato-Tate conjecture, it does confirm that $P_m(x)$ depends on the coefficient field of the eigenform.
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGInf100.png}
\caption{$m=100$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGInf500.png}
\caption{$m=500$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGInf1000.png}
\caption{$m=1000$}
\end{subfigure}
\caption{Plots of $\#\{p<x\textnormal{ prime} \mid Z_p \in ]-m\sqrt D/2, m\sqrt D/2[\}$ (dots), $\frac{16 \sqrt D m}{3\pi^2} \frac {\pi(x)} {\sqrt x}$ (dashed) and $\frac{16 \sqrt D m}{3\pi^2} \frac {\sqrt x} {\log x}$ (full line) for $\sqrt D = \sqrt 5$ (cyan) and $\sqrt D = \sqrt 2$ (red) for each eigenform $f_N$ and $m =100$, $500$, $1000$.}
\label{fig:Inf}
\end{figure}
\subsection{Finite Places}
In \cite{BiDi} Nicolas Billerey and Luis Dieulefait provide explicit bounds on the primes $\ell$ that do not have large image for a given eigenform $f \in S_2(\Gamma_0(N))$ with square-free level $N$. In fact they provide a more general result. We only state the lemma for square-free level.
\begin{lemma}\label{cor:posExcP}
Let $f$ be an eigenform in $S_2(\Gamma_0(N))$. Assume that $N=p_1p_2\cdots p_t$, where $p_1,...,p_t$ are $t\geq 1$ distinct primes and $\ell$ is exceptional. Then $\ell$ divides $15N$ or $p_i^2-1$ for some $1\leq i\leq t$.
\end{lemma}
\begin{proof}
This is the statement of \cite[Theorem~2.6]{BiDi} in the weight $2$ case.
\end{proof}
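Lemma \ref{cor:posExcP} makes the candidate list finite and easy to compute. A minimal sketch (plain Python, our illustration; the levels are those of our six eigenforms):

```python
def prime_factors(n):
    # Set of prime divisors of n by trial division
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def candidate_exceptional(level_factors):
    # Primes dividing 15N or p_i^2 - 1 for a square-free level N
    n = 1
    for p in level_factors:
        n *= p
    cands = prime_factors(15 * n)
    for p in level_factors:
        cands |= prime_factors(p * p - 1)
    return sorted(cands)

assert candidate_exceptional([29]) == [2, 3, 5, 7, 29]       # N = 29
assert candidate_exceptional([5, 11]) == [2, 3, 5, 11]       # N = 55
```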
The eigenforms in our computations have weight $2$ and square-free level so we can apply the lemma and obtain a list of primes that are possibly exceptional for each eigenform (Table \ref{table:FN}).
If $\ell$ is an odd unramified prime with large image Lemmas \ref{lemma:chebdens} and \ref{lemma:FlkFinte} yield
$$\#\{p<x\textnormal{ prime}\mid Z_p \equiv 0 \mod \ell^k\}\sim \pi(x)\cdot\begin{dcases} \frac{\ell^2 + \ell + 1 - \ell^{-2k}}{(\ell+1)(\ell^4-1)\ell^{k-3}} &\textnormal{if } \ell \textnormal{ is inert} \\ \frac{\ell^4+\ell^3 -\ell^2 -2\ell -\ell^{-2k+2}}{(\ell^2-1)^2\ell^{k-1} }& \textnormal{if } \ell \textnormal{ is split.}\end{dcases}$$
For each prime that is possibly exceptional and each eigenform, we can confirm whether the prime is indeed exceptional by comparing $\#\{p<x\textnormal{ prime}\mid Z_p \equiv 0 \mod \ell^k\}$ with the value expected for a prime with large image, for $x$ up to $10^8$ (Fig. \ref{fig:mod}). For $\ell= 2$ we do not have a theoretical result for large image, so none is plotted. The same holds for $\ell=5$ and the eigenforms $f_{23}$, $f_{87}$ and $f_{167}$, since $5$ ramifies in $\Q(\sqrt 5)$.
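The expected curves for a prime with large image are obtained by evaluating the coefficient of $\pi(x)$ in the asymptotic displayed above. A minimal Python sketch (the function names are mine; the two expressions are transcribed verbatim from the display):

```python
# Coefficients of pi(x) in the asymptotic count of primes p < x with
# Z_p = 0 (mod l^k), for an odd unramified prime l with large image,
# transcribed from the inert and split formulas above.

def density_inert(l, k):
    return (l**2 + l + 1 - l**(-2 * k)) / ((l + 1) * (l**4 - 1) * l**(k - 3))

def density_split(l, k):
    return (l**4 + l**3 - l**2 - 2 * l - l**(-2 * k + 2)) / ((l**2 - 1)**2 * l**(k - 1))

for l, k in [(3, 1), (7, 1), (7, 2), (11, 1)]:
    print(l, k, density_inert(l, k), density_split(l, k))
```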
Some primes are inert in $\Q(\sqrt 5)$ and split in $\Q(\sqrt 2)$, or vice versa, so a priori there are two possibilities for the behaviour of $\#\{p<x\textnormal{ prime}\mid Z_p \equiv 0 \mod \ell^k\}$ for a prime $\ell$ with large image. The first prime for which this occurs is $7$: indeed $7$ splits in $\Q(\sqrt 2)$ and is inert in $\Q(\sqrt 5)$. For $\ell = 7$, however, one can hardly distinguish the inert and split cases visually.
From Figure \ref{fig:mod} we can confirm that an odd unramified prime is exceptional for a given eigenform if the plot of $\#\{p<x \mid Z_p \equiv 0 \mod \ell^k\}$ differs from that of the large image case. Moreover, for any prime $\ell$, we can conclude that the images of the $\ell$-adic representations attached to different eigenforms are distinct. Note that the converse does not hold: the fact that two eigenforms exhibit the same behaviour with respect to $\#\{p<x \textnormal{ prime} \mid Z_p \equiv 0 \mod \ell^k\}$ does not imply that their $\ell$-adic representations are the same.
For example, if $\ell^k= 2$ (Fig. \ref{figure:mod2}), all eigenforms except $f_{167}$ exhibit the same behaviour, but for $\ell^k=8$ (Fig. \ref{figure:mod8}) we clearly distinguish five different representations; we do not observe this behaviour for any other prime. From $\ell^k=3$ (Fig. \ref{figure:mod3}) we can conclude that $3$ is an exceptional prime for the $3$-adic representations attached to $f_{43}$ and $f_{55}$. The primes marked in bold in the last column of Table \ref{table:FN} are those for which Figure \ref{fig:mod} confirms that the prime is exceptional.
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGMod2.png}
\caption{$\ell^k=2$}
\label{figure:mod2}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGMod8.png}
\caption{$\ell^k=8$}
\label{figure:mod8}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGMod3.png}
\caption{$\ell^k=3$}
\label{figure:mod3}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}\label{figure:mod5}
\includegraphics[width = \linewidth ]{LTGMod5.png}
\caption{$\ell^k=5$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}\label{figure:mod7}
\includegraphics[width = \linewidth ]{LTGMod7.png}
\caption{$\ell^k=7$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}\label{figure:mod11}
\includegraphics[width = \linewidth ]{LTGMod11.png}
\caption{$\ell^k=11$}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}\label{figure:mod29}
\includegraphics[width = \linewidth ]{LTGMod29.png}
\caption{$\ell^k=29$}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}\label{figure:mod43}
\includegraphics[width = \linewidth ]{LTGMod43.png}
\caption{$\ell^k=43$}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}\label{figure:mod83}
\includegraphics[width = \linewidth ]{LTGMod83.png}
\caption{$\ell^k=83$}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}\label{figure:mod167}
\includegraphics[width = \linewidth ]{LTGMod167.png}
\caption{$\ell^k=167$}
\end{subfigure}
\caption{Plots of $\#\{p<x \textnormal{ prime} \mid Z_p \equiv 0 \mod \ell^k\}$ for large image (line) and actual value (dots) for all eigenforms. If $\ell = 2$ no function is plotted for large image.}
\label{fig:mod}
\end{figure}
\subsection{Main Result}
Next we test Theorem \ref{theorem:MainResult} by comparing the behaviour of $\#\{p<x \textnormal{ prime} \mid a_p \in \Q \}$ with $ c_N\sqrt x /\log x$ where $ c_N$ is the constant predicted by Theorem \ref{theorem:MainResult}.
Recall that, according to our main theorem, under Assumptions \ref{ass:PmAss} and \ref{ass:ind} and the generalized Sato-Tate conjecture we have
$$c_N = \frac{16\sqrt D}{3\pi ^2} \widehat F.$$
Since $\widehat F$ is a limit by divisibility, we approximate it numerically. In order to do so we use the following assumption.
\begin{assumption}\label{ass:IndPrimes}
Let $m$ and $m'$ be co-prime integers. Then
$$P_m(x) \cdot P_{m'}(x) \sim P_{m\cdot m'}(x).$$
\end{assumption}
If $\widehat \rho$ is an independent system of representations in the sense of \cite[Section 3]{ser10}, the assumption holds. However, $\widehat \rho$ is in general not an independent system, and the assumption is a much weaker claim. Moreover, this assumption is only needed to obtain a numerical result; our main theorem holds even if the assumption is false. All computations support the assumption.
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod8_3.png}
\caption{$m\cdot m' = 8\cdot 3 $}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod8_5.png}
\caption{$m\cdot m' = 8\cdot 5 $}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod8_7.png}
\caption{$m\cdot m' = 8\cdot 7 $}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod3_5.png}
\caption{$m\cdot m' = 3\cdot 5 $}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod3_7.png}
\caption{$m\cdot m' = 3\cdot 7 $}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndMod5_7.png}
\caption{$m\cdot m' = 5\cdot 7 $}
\end{subfigure}
\caption{Plots of $P_m(x)\cdot P_{m'}(x)/P_{m\cdot m'}(x)$ for the co-prime pairs $(m, m')$ shown (cf.\ Assumption \ref{ass:IndPrimes}).}
\label{fig:IndPrimes}
\end{figure}
Under Assumption \ref{ass:IndPrimes} we can compute an approximation of $\widehat F$ by taking the product over all primes
$$\widehat F = \prod _\ell \widehat F_\ell.$$
For odd unramified primes with large image the factor $\widehat F_\ell$ is given by Lemma \ref{lemma:FlkFinte}. Let $\{\ell_1,...,\ell_t\}$ be the set of primes that are even, ramified or possibly exceptional. For every such prime $\ell_i$ we apply Lemma \ref{lemma:chebdens} to $\ell_i^{k_i}$, with $k_i$ the largest integer such that $\ell_i^{k_i}$ is less than $\sqrt {10^8} /20$, i.e.
$$k_i = \lfloor \log_{\ell_i}( \sqrt {10^8}/20) \rfloor.$$
We therefore use the following approximation for $c_N$:
$$\widehat c_N = \frac{16\sqrt D}{3\pi ^2}\prod_{i=1}^t \ell_i^{k_i}P_{\ell_i^{k_i}}(10^8) \prod _{\substack{ \ell \text { unramified} \\ \textnormal{with large image} }} \widehat F_\ell. $$
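The exponents $k_i$ can be computed directly; a short Python check (the list of primes is taken from the last column of Table \ref{table:FN}):

```python
import math

# k_i = floor(log_{l_i}(sqrt(10^8)/20)) for the even, ramified or
# possibly exceptional primes occurring in Table ref{table:FN}.
bound = math.isqrt(10**8) / 20                       # = 500.0
ks = {l: math.floor(math.log(bound, l))
      for l in (2, 3, 5, 7, 11, 23, 29, 43, 83, 167)}
print(ks)   # e.g. k = 8 for l = 2, k = 5 for l = 3, k = 3 for l = 5
```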
For every eigenform $f_N$ we plot $\widehat c_N \sqrt x / \log x$, $\widehat c_N \pi(x)/\sqrt x$ and $\#\{p<x \textnormal{ prime} \mid a_p \in \Q \}$ (see Figure \ref{fig:MainResult}).
Comparing the values of $\widehat c_N$ to the constants $\widetilde c_N$ previously found by least-squares fitting yields $1.025 <\widetilde c_N/ \widehat c_N <1.149 $ (Table \ref{table:FN}). An error of this size is to be expected for such a small bound on the primes. For example, in the proof of Corollary \ref{cor:PmCor} we use $\frac{\sqrt x}{\log x}$ to approximate $\sum_{p=2}^{x} \frac{1}{2\sqrt p}$. For $x=10^8$ this estimate yields a similar error:
$$\frac{\log {10^8}}{\sqrt {10^8}} \cdot \sum_{p=2}^{10^8} \frac{1}{2\sqrt p} =1.146\cdots.$$
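This overestimate is easy to reproduce at a smaller bound. A hedged Python sketch (run at $x = 10^6$ rather than $10^8$ to keep the computation cheap; the ratio at $10^6$ differs somewhat from the $1.146\cdots$ above):

```python
import math

# Ratio between sum_{p <= x} 1/(2 sqrt p) and its approximation sqrt(x)/log(x),
# computed with a simple sieve of Eratosthenes.

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

x = 10**6
s = sum(1 / (2 * math.sqrt(p)) for p in primes_up_to(x))
ratio = s * math.log(x) / math.sqrt(x)
print(ratio)   # an overestimate of the same kind as the 1.146... at x = 10^8
```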
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel29.png}
\caption{$N=29$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel43.png}
\caption{$N=43$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel55.png}
\caption{$N=55$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel23.png}
\caption{$N=23$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel87.png}
\caption{$N=87$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel167.png}
\caption{$N=167$}
\end{subfigure}
\caption{Plots of $\#\{p<x \textnormal{ prime} \mid a_p \in \Q \}$ (dots), $\widehat c_N \frac{\sqrt x}{\log x}$ (full line) and $\widehat c_N\frac{\pi(x)}{\sqrt x}$ (dashed line) for all eigenforms $f_N$, with $\widehat c_N$ based on Theorem \ref{theorem:MainResult}.}
\label{fig:MainResult}
\end{figure}
\begin{table}[h]
\centering
\caption{For each eigenform $f_N$ the table contains the level $N$, the constant $\widetilde c_N$ obtained by least-squares fitting, the constant $\widehat c_N$ according to Theorem \ref{theorem:MainResult}, the error $\widetilde c_N/\widehat c_N$, and the possibly exceptional primes according to Lemma \ref{cor:posExcP}. The confirmed exceptional primes are marked in bold.}
\begin{tabular}{rrrrl}
N & $\widetilde c_{N}\ $ & $\widehat c_N\ $ & $\widetilde c_N/\widehat c_N$& Pos. exc. primes\\
\hline
29 &4.990& 4.517&1.104 & 2, 3, 5, \textbf{7}, 29\\
43 & 4.588&4.204&1.109 &2, \textbf{3}, 5, \textbf{7}, 11, 43\\
55 & 10.515&9.958&1.056 &2, \textbf{3}, 5 ,11\\
23 & 5.490&4.982&1.102 &2, 3, 5, \textbf{11}, 23\\
87 & 4.972&4.413&1.127 &2, 3, \textbf{5}, 7, 29\\
167 & 2.066&1.833&1.127 &2, 3, 5, 7, 83, 167
\end{tabular}
\label{table:FN}
\end{table}
\pagebreak
\subsection{Final Assumption}
The final assumptions we check are Assumptions \ref{ass:ind} and \ref{ass:ind1}. Recall that Assumption \ref{ass:ind} states that for every eigenform $f_N$ and every $\varepsilon>0$ there exists an $m_0$ such that for all $m$ with $m_0\mid m$ there exists an $x_0$ such that for all $x>x_0$
$$\left| \frac{P^{m}(x)\cdot P_{m}(x)}{P(x)} -1\right| <\varepsilon.$$
Assumption \ref{ass:ind1} is a much weaker claim and states that the limit
$$ \lim_{x\rightarrow \infty} \frac{P^{m}(x)\cdot P_{m}(x)}{P(x)} $$
exists for a given positive $m$.
For all eigenforms we can find various $m$ such that
$$\left| \frac{P^{m}(x)\cdot P_{m}(x)}{P(x)} -1\right| <0.2$$
for all $x$ larger than $5\cdot 10^7$. For every eigenform we choose different values of $m$ and plot $\frac{P^{m}(x)\cdot P_{m}(x)}{P(x)}$ together with the constant functions $1$ and $\frac{P^{m_N}(10^8)\cdot P_{m_N}(10^8)}{P(10^8)}$ (Fig. \ref{fig:Ind}), where $m_N$ is the largest positive integer used for the eigenform $f_N$. The values of $m$ are chosen so that they increase by divisibility and so that the confirmed exceptional primes divide $m$.
Additionally, Figure \ref{fig:Ind} provides numerical evidence for Assumption \ref{ass:ind1}, which by Corollary \ref{cor:MainResult} implies the existence of the double limit of $P_m(x)\cdot P^m(x)/P(x)$. However, one could argue that the figure suggests that the double limit does not converge to $1$. For every $N$ let us write
$$ \alpha_N = \limd m \lim_{x \rightarrow \infty} \frac{P^m (x)\cdot P_m(x)}{P(x)}$$
as in Corollary \ref{cor:MainResult}. The corollary then states that
$$\#\{p < x \textnormal{ prime } \mid a_p(f_N) \in \Q\} \sim \frac{1}{\alpha_N} \frac{16 \sqrt D \widehat F} {3\pi ^{2} }\frac{\sqrt x}{\log x}.$$
We have a convincing estimate $\widehat c_N$ of $c_N$, and $\alpha_{m_N} = {P^{m_N}(10^8)\cdot P_{m_N}(10^8)}/{P(10^8)}$ is the best available approximation of $\alpha_N$. So we can check this last statement by plotting both functions (Fig. \ref{fig:MainResultAPi}). In this figure $1/\alpha_{m_N}\, \widehat c_N\frac{\pi(x)}{\sqrt x}$ clearly yields an overestimate, whereas we in fact expect a slight underestimate. This indicates that, although the convergence might be slow, the double limit equals $1$.
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN29L.png}
\caption{$N=29$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN43L.png}
\caption{$N=43$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN55L.png}
\caption{$N=55$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN23L.png}
\caption{$N=23$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN87L.png}
\caption{$N=87$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGIndN167L.png}
\caption{$N=167$}
\end{subfigure}
\caption{Plots of ${P^{m}(x)\cdot P_{m}(x)}/{P(x)}$ (dots) and the constant functions $1$ (black line) and ${P^{m_N}(10^8)\cdot P_{m_N}(10^8)}/{P(10^8)}$ (coloured line) for each eigenform $f_N$ and various divisors $m$ of $m_N$ for $x$ up to $10^8$.}
\label{fig:Ind}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel29A.png}
\caption{$N=29$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel43A.png}
\caption{$N=43$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel55A.png}
\caption{$N=55$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel23A.png}
\caption{$N=23$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel87A.png}
\caption{$N=87$}
\end{subfigure}
\begin{subfigure}{0.329\textwidth}
\includegraphics[width = \linewidth ]{LTGModel167A.png}
\caption{$N=167$}
\end{subfigure}
\caption{Plots of $\#\{p<x \textnormal{ prime} \mid a_p \in \Q \}$ (dots), $\widehat c_N \frac{\pi (x)}{\sqrt x}$ (full line) and $ 1/\alpha_{m_N}\, \widehat c_N\frac{\pi(x)}{\sqrt x}$ (dashed line) for all eigenforms $f_N$, with $\widehat c_N$ based on Theorem \ref{theorem:MainResult}.}
\label{fig:MainResultAPi}
\end{figure}
\pagebreak
TITLE: Isomorphism of quotient fields
QUESTION [1 upvotes]: Consider $\mathbb R$ and $\mathbb Q$ with their usual meanings. Which of the following rings are isomorphic?
a. $\mathbb Q[x]/\langle x^2+1\rangle $ and $\mathbb Q[x]/\langle x^2+x+1\rangle$
b. $\mathbb R[x]/\langle x^2+1\rangle $ and $\mathbb R[x]/\langle x^2+x+1\rangle$
My attempt: I know that $\mathbb Q[x]/\langle x^2+1\rangle$, $\mathbb Q[x]/\langle x^2+x+1\rangle$, $\mathbb R[x]/\langle x^2+1\rangle$ and $\mathbb R[x]/\langle x^2+x+1\rangle$ are all fields, as the polynomials are all irreducible. Also, any element of $\mathbb Q[x]/\langle x^2+1\rangle$ is of the form $ax+b+\langle x^2+1\rangle$. I tried some familiar mappings like $f(ax+b+\langle x^2+1\rangle)=ax+b+\langle x^2+x+1\rangle$ but did not get the required result. Please give some hints.
REPLY [0 votes]: (a) The ring $\mathbb{Q}[X]/(X^2 + 1)$ contains an element $x$ with $x^2 = -1$, and this property is preserved by ring isomorphism (since such a map must map $-1$ to $-1$). Show that $A = \mathbb{Q}[X]/(X^2 + X + 1)$ does not contain such an element by, for example, just writing out a basis of $A\subset \mathbb{C}$ over $\mathbb{Q}$.
(b) Both rings are $2$-dimensional field extensions of $\mathbb{R}$ in $\overline{\mathbb{R}} = \mathbb{C}$, which has $[\mathbb{C}:\mathbb{R}] = 2$.
TITLE: Subgroup Lattice of D14
QUESTION [1 upvotes]: I've managed to find all of the subgroups of D14 (6 of order 7, 7 of order 2 and 1 of order 1) but I'm really struggling to put this into a lattice as I've never done it for dihedral groups before, only for symmetric groups. How do I find which of my subgroups are subgroups of the other subgroups? It's very easy with symmetric groups but I think I'm missing something here. Thanks in advance.
Edit: by D14 I'm using D2n notation so in this case the shape in question is a heptagon.
REPLY [1 votes]: I could be wrong but I think the only subgroups in the lattice will be D14, <σ>, <ρ> and 1 as anything else will generate the entire group. Not certain though...
theory Derivations
imports CFG ListTools InductRules
begin
(* leftderives and leftderives1 *)
context CFG begin
lemma [simp]: "is_terminal t \<Longrightarrow> is_symbol t"
by (auto simp add: is_terminal_def is_symbol_def)
lemma [simp]: "is_sentence []" by (auto simp add: is_sentence_def)
lemma [simp]: "is_word []" by (auto simp add: is_word_def)
lemma [simp]: "is_word u \<Longrightarrow> is_sentence u"
by (induct u, auto simp add: is_word_def is_sentence_def)
definition leftderives1 :: "sentence \<Rightarrow> sentence \<Rightarrow> bool"
where
"leftderives1 u v =
(\<exists> x y N \<alpha>.
u = x @ [N] @ y
\<and> v = x @ \<alpha> @ y
\<and> is_word x
\<and> is_sentence y
\<and> (N, \<alpha>) \<in> \<RR>)"
lemma leftderives1_implies_derives1[simp]: "leftderives1 u v \<Longrightarrow> derives1 u v"
apply (auto simp add: leftderives1_def derives1_def)
apply (rule_tac x="x" in exI)
apply (rule_tac x="y" in exI)
apply (rule_tac x="N" in exI)
by auto
definition leftderivations1 :: "(sentence \<times> sentence) set"
where
"leftderivations1 = { (u,v) | u v. leftderives1 u v }"
lemma [simp]: "leftderivations1 \<subseteq> derivations1"
by (auto simp add: leftderivations1_def derivations1_def)
definition leftderivations :: "(sentence \<times> sentence) set"
where
"leftderivations = leftderivations1^*"
lemma rtrancl_subset_implies: "a \<subseteq> b \<Longrightarrow> a \<subseteq> b^*" by blast
lemma leftderivations_subset_derivations[simp]: "leftderivations \<subseteq> derivations"
apply (simp add: leftderivations_def derivations_def)
apply (rule rtrancl_subset_rtrancl)
apply (rule rtrancl_subset_implies)
by simp
definition leftderives :: "sentence \<Rightarrow> sentence \<Rightarrow> bool"
where
"leftderives u v = ((u, v) \<in> leftderivations)"
lemma leftderives_implies_derives[simp]: "leftderives u v \<Longrightarrow> derives u v"
apply (auto simp add: leftderives_def derives_def)
by (rule subsetD[OF leftderivations_subset_derivations])
definition is_leftderivation :: "sentence \<Rightarrow> bool"
where
"is_leftderivation u = leftderives [\<SS>] u"
lemma leftderivation_implies_derivation[simp]:
"is_leftderivation u \<Longrightarrow> is_derivation u"
by (simp add: is_leftderivation_def is_derivation_def)
lemma leftderives_refl[simp]: "leftderives u u"
by (auto simp add: leftderives_def leftderivations_def)
lemma leftderives1_implies_leftderives[simp]:"leftderives1 a b \<Longrightarrow> leftderives a b"
by (auto simp add: leftderives_def leftderivations_def leftderivations1_def)
lemma leftderives_trans: "leftderives a b \<Longrightarrow> leftderives b c \<Longrightarrow> leftderives a c"
by (auto simp add: leftderives_def leftderivations_def)
lemma leftderives1_eq_leftderivations1: "leftderives1 x y = ((x, y) \<in> leftderivations1)"
by (simp add: leftderivations1_def)
lemma leftderives_induct[consumes 1, case_names Base Step]:
assumes derives: "leftderives a b"
assumes Pa: "P a"
assumes induct: "\<And>y z. leftderives a y \<Longrightarrow> leftderives1 y z \<Longrightarrow> P y \<Longrightarrow> P z"
shows "P b"
proof -
note rtrancl_lemma = rtrancl_induct[where a = a and b = b and r = leftderivations1 and P=P]
from derives Pa induct rtrancl_lemma show "P b"
by (metis leftderives_def leftderivations_def leftderives1_eq_leftderivations1)
qed
end
(* Basic lemmas about derives1 and derives *)
context CFG begin
lemma derives1_implies_derives[simp]:"derives1 a b \<Longrightarrow> derives a b"
by (auto simp add: derives_def derivations_def derivations1_def)
lemma derives_trans: "derives a b \<Longrightarrow> derives b c \<Longrightarrow> derives a c"
by (auto simp add: derives_def derivations_def)
lemma derives1_eq_derivations1: "derives1 x y = ((x, y) \<in> derivations1)"
by (simp add: derivations1_def)
lemma derives_induct[consumes 1, case_names Base Step]:
assumes derives: "derives a b"
assumes Pa: "P a"
assumes induct: "\<And>y z. derives a y \<Longrightarrow> derives1 y z \<Longrightarrow> P y \<Longrightarrow> P z"
shows "P b"
proof -
note rtrancl_lemma = rtrancl_induct[where a = a and b = b and r = derivations1 and P=P]
from derives Pa induct rtrancl_lemma show "P b"
by (metis derives_def derivations_def derives1_eq_derivations1)
qed
end
(* Derives1 and Derivation, LDerives1 and LDerivation *)
context CFG begin
definition Derives1 :: "sentence \<Rightarrow> nat \<Rightarrow> rule \<Rightarrow> sentence \<Rightarrow> bool"
where
"Derives1 u i r v =
(\<exists> x y N \<alpha>.
u = x @ [N] @ y
\<and> v = x @ \<alpha> @ y
\<and> is_sentence x
\<and> is_sentence y
\<and> (N, \<alpha>) \<in> \<RR>
\<and> r = (N, \<alpha>) \<and> i = length x)"
lemma Derives1_split:
"Derives1 u i r v \<Longrightarrow> \<exists> x y. u = x @ [fst r] @ y \<and> v = x @ (snd r) @ y \<and> length x = i"
by (metis Derives1_def fst_conv snd_conv)
lemma Derives1_implies_derives1: "Derives1 u i r v \<Longrightarrow> derives1 u v"
by (auto simp add: Derives1_def derives1_def)
lemma derives1_implies_Derives1: "derives1 u v \<Longrightarrow> \<exists> i r. Derives1 u i r v"
by (auto simp add: Derives1_def derives1_def)
lemma Derives1_unique_dest: "Derives1 u i r v \<Longrightarrow> Derives1 u i r w \<Longrightarrow> v = w"
by (auto simp add: Derives1_def derives1_def)
lemma Derives1_unique_src: "Derives1 u i r w \<Longrightarrow> Derives1 v i r w \<Longrightarrow> u = v"
by (auto simp add: Derives1_def derives1_def)
type_synonym derivation = "(nat \<times> rule) list"
fun Derivation :: "sentence \<Rightarrow> derivation \<Rightarrow> sentence \<Rightarrow> bool"
where
"Derivation a [] b = (a = b)"
| "Derivation a (d#D) b = (\<exists> x. Derives1 a (fst d) (snd d) x \<and> Derivation x D b)"
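As a concrete operational reading of `Derives1` and `Derivation`, here is a small executable mirror in Python (a toy sketch, not part of the theory; symbols are strings and a rule is a pair `(N, alpha)`):

```python
# A derivation step (i, (N, alpha)) rewrites the symbol at position i,
# which must equal the left-hand side N, into the sentence alpha --
# mirroring Derives1 u i r v with u = x @ [N] @ y and v = x @ alpha @ y.

def derives1_step(sentence, i, rule):
    lhs, rhs = rule
    assert sentence[i] == lhs, "rule must match the symbol at position i"
    return sentence[:i] + rhs + sentence[i + 1:]

def derivation(sentence, steps):
    # mirrors Derivation a D b: fold the steps left to right
    for i, rule in steps:
        sentence = derives1_step(sentence, i, rule)
    return sentence

# Toy grammar S -> a S b | <empty>; uppercase = nonterminal.
result = derivation(["S"], [(0, ("S", ["a", "S", "b"])),
                            (1, ("S", ["a", "S", "b"])),
                            (2, ("S", []))])
print(result)   # ['a', 'a', 'b', 'b']
```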
lemma Derivation_implies_derives: "Derivation a D b \<Longrightarrow> derives a b"
proof (induct D arbitrary: a b)
case Nil thus ?case
by (simp add: derives_def derivations_def)
next
case (Cons d D)
note ihyps = this
from ihyps have "\<exists> x. Derives1 a (fst d) (snd d) x \<and> Derivation x D b" by auto
then obtain x where "Derives1 a (fst d) (snd d) x" and xb: "Derivation x D b" by blast
with Derives1_implies_derives1 have d1: "derives a x" by auto
from ihyps xb have d2:"derives x b" by simp
show "derives a b" by (rule derives_trans[OF d1 d2])
qed
lemma Derivation_Derives1: "Derivation a S y \<Longrightarrow> Derives1 y i r z \<Longrightarrow> Derivation a (S@[(i,r)]) z"
proof (induct S arbitrary: a y z i r)
case Nil thus ?case by simp
next
case (Cons s S) thus ?case
by (metis Derivation.simps(2) append_Cons)
qed
lemma derives_implies_Derivation: "derives a b \<Longrightarrow> \<exists> D. Derivation a D b"
proof (induct rule: derives_induct)
case Base
show ?case by (rule exI[where x="[]"], simp)
next
case (Step y z)
note ihyps = this
from ihyps obtain D where ay: "Derivation a D y" by blast
from ihyps derives1_implies_Derives1 obtain i r where yz: "Derives1 y i r z" by blast
from Derivation_Derives1[OF ay yz] show ?case by auto
qed
lemma Derives1_take: "Derives1 a i r b \<Longrightarrow> take i a = take i b"
by (auto simp add: Derives1_def)
lemma Derives1_drop: "Derives1 a i r b \<Longrightarrow> drop (Suc i) a = drop (i + length (snd r)) b"
by (auto simp add: Derives1_def)
lemma Derives1_bound: "Derives1 a i r b \<Longrightarrow> i < length a"
by (auto simp add: Derives1_def)
lemma Derives1_length: "Derives1 a i r b \<Longrightarrow> length b = length a + length (snd r) - 1"
by (auto simp add: Derives1_def)
definition leftmost :: "nat \<Rightarrow> sentence \<Rightarrow> bool"
where
"leftmost i s = (i < length s \<and> is_word (take i s) \<and> is_nonterminal (s ! i))"
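The `leftmost` predicate singles out the position of the first nonterminal, with everything before it a word. An executable sketch in Python (modelling nonterminals as uppercase strings is my own convention, not the theory's):

```python
# leftmost i s holds iff i < length s, take i s is a word (all terminals)
# and s ! i is a nonterminal -- i.e. i is the index of the first nonterminal.

def is_nonterminal(sym):
    return sym.isupper()

def leftmost_index(sentence):
    for i, sym in enumerate(sentence):
        if is_nonterminal(sym):
            return i      # everything before position i was terminal
    return None           # the sentence is a word: no leftmost position exists

print(leftmost_index(["a", "b", "S", "c", "T"]))   # 2
print(leftmost_index(["a", "b"]))                  # None
```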
lemma set_take: "set (take n s) = { s ! i | i. i < n \<and> i < length s}"
proof (cases "n \<le> length s")
case True thus ?thesis
by (subst List.nth_image[symmetric], auto)
next
case False thus ?thesis
apply (subst set_conv_nth)
by (metis less_trans linear not_le take_all)
qed
lemma list_all_take: "list_all P (take n s) = (\<forall> i. i < n \<and> i < length s \<longrightarrow> P (s ! i))"
by (auto simp add: set_take list_all_iff)
lemma is_sentence_concat: "is_sentence (x@y) = (is_sentence x \<and> is_sentence y)"
by (auto simp add: is_sentence_def)
lemma is_sentence_cons: "is_sentence (x#xs) = (is_symbol x \<and> is_sentence xs)"
by (auto simp add: is_sentence_def)
lemma rule_nonterminal_type[simp]: "(N, \<alpha>) \<in> \<RR> \<Longrightarrow> is_nonterminal N"
apply (insert validRules)
by (auto simp add: is_nonterminal_def)
lemma rule_\<alpha>_type[simp]: "(N, \<alpha>) \<in> \<RR> \<Longrightarrow> is_sentence \<alpha>"
apply (insert validRules)
by (auto simp add: is_sentence_def is_symbol_def list_all_iff is_terminal_def is_nonterminal_def)
lemma [simp]: "is_nonterminal N \<Longrightarrow> is_symbol N"
by (simp add: is_symbol_def)
lemma Derives1_sentence1[elim]: "Derives1 a i r b \<Longrightarrow> is_sentence a"
by (auto simp add: Derives1_def is_sentence_cons is_sentence_concat)
lemma Derives1_sentence2[elim]: "Derives1 a i r b \<Longrightarrow> is_sentence b"
by (auto simp add: Derives1_def is_sentence_cons is_sentence_concat)
lemma [elim]: "Derives1 a i r b \<Longrightarrow> r \<in> \<RR>"
using Derives1_def by auto
lemma is_sentence_symbol: "is_sentence a \<Longrightarrow> i < length a \<Longrightarrow> is_symbol (a ! i)"
by (simp add: is_sentence_def list_all_iff)
lemma is_symbol_distinct: "is_symbol x \<Longrightarrow> is_terminal x \<noteq> is_nonterminal x"
apply (insert disjunct_symbols)
apply (auto simp add: is_symbol_def is_terminal_def is_nonterminal_def)
done
lemma is_terminal_nonterminal: "is_terminal x \<Longrightarrow> is_nonterminal x \<Longrightarrow> False"
by (metis is_symbol_def is_symbol_distinct)
lemma Derives1_leftmost:
assumes "Derives1 a i r b"
shows "\<exists> j. leftmost j a \<and> j \<le> i"
proof -
let ?J = "{j . j < length a \<and> is_nonterminal (a ! j)}"
let ?M = "Min ?J"
from assms have J1:"i \<in> ?J"
apply (auto simp add: Derives1_def is_nonterminal_def)
by (metis (mono_tags, lifting) prod.simps(2) validRules)
have J2:"finite ?J" by auto
note J = J1 J2
from J have M1: "?M \<in> ?J" by (rule_tac Min_in, auto)
{
fix j
assume "j \<in> ?J"
with J have "?M \<le> j" by auto
}
note M3 = this[OF J1]
from assms have a_sentence: "is_sentence a" by (simp add: Derives1_sentence1)
have is_word: "is_word (take ?M a)"
apply (auto simp add: is_word_def list_all_take)
proof -
fix i
assume i_less_M: "i < ?M"
assume i_inbounds: "i < length a"
show "is_terminal (a ! i)"
proof(cases "is_terminal (a ! i)")
case True thus ?thesis by auto
next
case False
then have "is_nonterminal (a ! i)"
using i_inbounds a_sentence is_sentence_symbol is_symbol_distinct by blast
then have "i \<in> ?J" by (metis i_inbounds mem_Collect_eq)
then have "?M < i" by (metis J2 Min_le i_less_M leD)
then have "False" by (metis i_less_M less_asym')
then show ?thesis by auto
qed
qed
show ?thesis
apply (rule_tac exI[where x="?M"])
apply (simp add: leftmost_def)
by (metis (mono_tags, lifting) M1 M3 is_word mem_Collect_eq)
qed
lemma Derivation_leftmost: "D \<noteq> [] \<Longrightarrow> Derivation a D b \<Longrightarrow> \<exists> i. leftmost i a"
apply (case_tac "D")
apply (auto)
apply (metis Derives1_leftmost)
done
lemma nonword_has_nonterminal:
"is_sentence a \<Longrightarrow> \<not> (is_word a) \<Longrightarrow> \<exists> k. k < length a \<and> is_nonterminal (a ! k)"
apply (auto simp add: is_sentence_def list_all_iff is_word_def)
by (metis in_set_conv_nth is_symbol_distinct)
lemma leftmost_cons_nonterminal:
"is_nonterminal x \<Longrightarrow> leftmost 0 (x#xs)"
by (metis CFG.is_word_def CFG_axioms leftmost_def length_greater_0_conv list.distinct(1)
list_all_simps(2) nth_Cons_0 take_Cons')
lemma leftmost_cons_terminal:
"is_terminal x \<Longrightarrow> leftmost i (x#xs) = (i > 0 \<and> leftmost (i - 1) xs)"
by (metis Suc_diff_1 Suc_less_eq is_terminal_nonterminal is_word_def leftmost_def length_Cons
list_all_simps(1) not_gr0 nth_Cons' take_Cons')
lemma is_nonterminal_cons_terminal:
"is_terminal x \<Longrightarrow> k < length (x # a) \<Longrightarrow> is_nonterminal ((x # a) ! k) \<Longrightarrow>
k > 0 \<and> k - 1 < length a \<and> is_nonterminal (a ! (k - 1))"
by (metis One_nat_def Suc_leI is_terminal_nonterminal less_diff_conv2
list.size(4) nth_non_equal_first_eq)
lemma leftmost_exists:
"is_sentence a \<Longrightarrow> k < length a \<Longrightarrow> is_nonterminal (a ! k) \<Longrightarrow>
\<exists> i. leftmost i a \<and> i \<le> k"
proof (induct a arbitrary: k)
case Nil thus ?case by auto
next
case (Cons x a)
show ?case
proof(cases "is_nonterminal x")
case True thus ?thesis
apply(rule_tac exI[where x="0"])
apply (simp add: leftmost_cons_nonterminal)
done
next
case False
then have x: "is_terminal x"
by (metis Cons.prems(1) is_sentence_cons is_symbol_distinct)
note k = is_nonterminal_cons_terminal[OF x Cons(3) Cons(4)]
with Cons have "\<exists>i. leftmost i a \<and> i \<le> k - 1" by (metis is_sentence_cons)
then show ?thesis
apply (auto simp add: leftmost_cons_terminal[OF x])
by (metis le_diff_conv2 Suc_leI add_Suc_right add_diff_cancel_right' k
le_0_eq le_imp_less_Suc nat_le_linear)
qed
qed
lemma nonword_leftmost_exists:
"is_sentence a \<Longrightarrow> \<not> (is_word a) \<Longrightarrow> \<exists> i. leftmost i a"
by (metis leftmost_exists nonword_has_nonterminal)
lemma leftmost_unaffected_Derives1: "leftmost j a \<Longrightarrow> j < i \<Longrightarrow> Derives1 a i r b \<Longrightarrow> leftmost j b"
apply (simp add: leftmost_def)
proof -
assume a1: "j < length a \<and> is_word (take j a) \<and> is_nonterminal (a ! j)"
assume a2: "j < i"
assume "Derives1 a i r b"
then have f3: "take i a = take i b"
by (metis Derives1_take)
have f4: "\<And>n ss ssa. take (length (take n (ss::symbol list))) (ssa::symbol list) = take (length ss) (take n ssa)"
by (metis length_take take_take)
have f5: "\<And>ss. take j (ss::symbol list) = take i (take j ss)"
using a2 by (metis dual_order.strict_implies_order min.absorb2 take_take)
have f6: "length (take j a) = j"
using a1 by (metis dual_order.strict_implies_order length_take min.absorb2)
then have f7: "\<And>n. min j n = length (take n (take j a))"
by (metis length_take)
have f8: "\<And>n ss. n = length (take n (ss::symbol list)) \<or> length ss < n"
by (metis leI length_take min.absorb2)
have f9: "\<And>ss. take j (ss::symbol list) = take j (take i ss)"
using f7 f6 f5 by (metis take_take)
have f10: "\<And>ss n. length (ss::symbol list) \<le> n \<or> length (take n ss) = n"
using f8 by (metis dual_order.strict_implies_order)
then have f11: "\<And>ss ssa. length (ss::symbol list) = length (take (length ss) (ssa::symbol list)) \<or> length (take (length ssa) ss) = length ssa"
by (metis length_take min.absorb2)
have f12: "\<And>ss ssa n. take (length (ss::symbol list)) (ssa::symbol list) = take n (take (length ss) ssa) \<or> length (take n ss) = n"
using f10 by (metis min.absorb2 take_take)
{ assume "\<not> j < j"
{ assume "\<not> j < j \<and> i \<noteq> j"
moreover
{ assume "length a \<noteq> j \<and> length (take i a) \<noteq> i"
then have "\<exists>ss. length (take (length (take i (take (length a) (ss::symbol list)))) (take j ss)) \<noteq> length (take i (take (length a) ss))"
using f12 f11 f6 f5 f4 by metis
then have "\<exists>ss ssa. take (length (ss::symbol list)) (take j (ssa::symbol list)) \<noteq> take (length ss) (take i (take (length a) ssa))"
using f11 by metis
then have "length b \<noteq> j"
using f9 f4 f3 by metis }
ultimately have "length b \<noteq> j"
using f7 f6 f5 f3 a1 by (metis length_take) }
then have "length b = j \<longrightarrow> j < j"
using a2 by metis }
then have "j < length b"
using f9 f8 f7 f6 f4 f3 by metis
then show "j < length b \<and> is_word (take j b) \<and> is_nonterminal (b ! j)"
using f9 f3 a2 a1 by (metis nth_take)
qed
definition derivation_ge :: "derivation \<Rightarrow> nat \<Rightarrow> bool"
where
"derivation_ge D i = (\<forall> d \<in> set D. fst d \<ge> i)"
lemma derivation_ge_cons: "derivation_ge (d#D) i = (fst d \<ge> i \<and> derivation_ge D i)"
by (auto simp add: derivation_ge_def)
lemma derivation_ge_append:
"derivation_ge (D@E) i = (derivation_ge D i \<and> derivation_ge E i)"
by (auto simp add: derivation_ge_def)
lemma leftmost_unaffected_Derivation:
"derivation_ge D (Suc i) \<Longrightarrow> leftmost i a \<Longrightarrow> Derivation a D b \<Longrightarrow> leftmost i b"
proof (induct D arbitrary: a)
case Nil thus ?case by auto
next
case (Cons d D)
then have "\<exists> x. Derives1 a (fst d) (snd d) x \<and> Derivation x D b" by simp
then obtain x where x1: "Derives1 a (fst d) (snd d) x" and x2: "Derivation x D b" by blast
with Cons have leftmost_x: "leftmost i x"
apply (rule_tac leftmost_unaffected_Derives1[
where a=a and j=i and b="x" and i="fst d" and r="snd d"])
by (auto simp add: derivation_ge_def)
with Cons x2 show ?case by (auto simp add: derivation_ge_def)
qed
lemma le_Derives1_take:
assumes le: "i \<le> j"
and D: "Derives1 a j r b"
shows "take i a = take i b"
proof -
note Derives1_take[where a=a and i=j and r=r and b=b]
with le D show ?thesis by (rule_tac le_take_same[where i=i and j=j], auto)
qed
lemma Derivation_take: "derivation_ge D i \<Longrightarrow> Derivation a D b \<Longrightarrow> take i a = take i b"
proof(induct D arbitrary: a b)
case Nil thus ?case by auto
next
case (Cons d D)
then have "\<exists> x. Derives1 a (fst d) (snd d) x \<and> Derivation x D b"
by simp
then obtain x where ax: "Derives1 a (fst d) (snd d) x" and xb: "Derivation x D b" by blast
from derivation_ge_cons Cons(2) have d: "i \<le> fst d" and D: "derivation_ge D i" by auto
note take_i_xb = Cons(1)[OF D xb]
note take_i_ax = le_Derives1_take[OF d ax]
from take_i_xb take_i_ax show ?case by auto
qed
lemma leftmost_cons_less: "i < length u \<Longrightarrow> leftmost i (u@v) = leftmost i u"
by (auto simp add: leftmost_def nth_append)
lemma leftmost_is_nonterminal: "leftmost i u \<Longrightarrow> is_nonterminal (u ! i)"
by (metis leftmost_def)
lemma is_word_is_terminal: "i < length u \<Longrightarrow> is_word u \<Longrightarrow> is_terminal (u ! i)"
by (metis is_word_def list_all_length)
lemma leftmost_append:
assumes leftmost: "leftmost i (u@v)"
and is_word: "is_word u"
shows "length u \<le> i"
proof (cases "i < length u")
case False thus ?thesis by auto
next
case True
with leftmost have "leftmost i u" using leftmost_cons_less[OF True] by simp
then have is_nonterminal: "is_nonterminal (u ! i)" by (rule leftmost_is_nonterminal)
note is_terminal = is_word_is_terminal[OF True is_word]
note is_terminal_nonterminal[OF is_terminal is_nonterminal]
then show ?thesis by auto
qed
lemma derivation_ge_empty[simp]: "derivation_ge [] i"
by (simp add: derivation_ge_def)
lemma leftmost_notword: "leftmost i a \<Longrightarrow> j > i \<Longrightarrow> \<not> (is_word (take j a))"
by (metis is_terminal_nonterminal is_word_def leftmost_def list_all_take)
lemma leftmost_unique: "leftmost i a \<Longrightarrow> leftmost j a \<Longrightarrow> i = j"
by (metis leftmost_def leftmost_notword linorder_neqE_nat)
lemma leftmost_Derives1: "leftmost i a \<Longrightarrow> Derives1 a j r b \<Longrightarrow> i \<le> j"
by (metis Derives1_leftmost leftmost_unique)
lemma leftmost_Derives1_propagate:
assumes leftmost: "leftmost i a"
and Derives1: "Derives1 a j r b"
shows "(is_word b \<and> i = j) \<or> (\<exists> k. leftmost k b \<and> i \<le> k)"
proof -
from leftmost_Derives1[OF leftmost Derives1] have ij: "i \<le> j" by auto
show ?thesis
proof (cases "is_word b")
case True show ?thesis
by (metis Derives1 True ij le_neq_implies_less leftmost
leftmost_unaffected_Derives1 order_refl)
next
case False show ?thesis
by (metis (no_types, opaque_lifting) Derives1 Derives1_bound Derives1_sentence2
Derives1_take append_take_drop_id ij le_neq_implies_less leftmost
leftmost_append leftmost_cons_less leftmost_def length_take
min.absorb2 nat_le_linear nonword_leftmost_exists not_le)
qed
qed
lemma is_word_Derives1[elim]: "is_word a \<Longrightarrow> Derives1 a i r b \<Longrightarrow> False"
by (metis Derives1_leftmost is_terminal_nonterminal is_word_is_terminal leftmost_def)
lemma is_word_Derivation[elim]: "is_word a \<Longrightarrow> Derivation a D b \<Longrightarrow> D = []"
by (metis Derivation_leftmost is_terminal_nonterminal is_word_def
leftmost_def list_all_length)
lemma leftmost_Derivation:
"leftmost i a \<Longrightarrow> Derivation a D b \<Longrightarrow> j \<le> i \<Longrightarrow> derivation_ge D j"
proof (induct D arbitrary: a b i j)
case Nil thus ?case by auto
next
case (Cons d D)
then have "\<exists> x. Derives1 a (fst d) (snd d) x \<and> Derivation x D b" by auto
then obtain x where ax:"Derives1 a (fst d) (snd d) x" and xb:"Derivation x D b" by blast
note ji = Cons(4)
note i_fstd = leftmost_Derives1[OF Cons(2) ax]
note disj = leftmost_Derives1_propagate[OF Cons(2) ax]
thus ?case
proof(induct rule: disjCases2)
case 1
with xb have "D = []" by auto
with 1 ji show ?case by (simp add: derivation_ge_def)
next
case 2
then obtain k where k: "leftmost k x" and ik: "i \<le> k" by blast
note ge = Cons(1)[OF k xb, where j=j]
from ji ik i_fstd ge show ?case
by (simp add: derivation_ge_cons)
qed
qed
lemma derivation_ge_list_all: "derivation_ge D i = list_all (\<lambda> d. fst d \<ge> i) D"
by (simp add: Ball_set derivation_ge_def)
lemma split_derivation_leftmost:
assumes "derivation_ge D i"
and "\<not> (derivation_ge D (Suc i))"
shows "\<exists> E F r. D = E@[(i, r)]@F \<and> derivation_ge E (Suc i)"
proof -
from assms have "\<exists> k. k < length D \<and> fst(D ! k) \<ge> i \<and> \<not>(fst(D ! k) \<ge> Suc i)"
by (metis derivation_ge_def in_set_conv_nth)
then have "\<exists> k. k < length D \<and> fst(D ! k) = i" by auto
then show ?thesis
proof(induct rule: ex_minimal_witness)
case (Minimal k)
then have k_len: "k < length D" and k_i: "fst (D ! k) = i" by auto
let ?E = "take k D"
let ?r = "snd (D ! k)"
let ?F = "drop (Suc k) D"
note split = split_list_at[OF k_len]
from k_i have D_k: "D ! k = (i, ?r)" by auto
show ?case
apply (rule exI[where x="?E"])
apply (rule exI[where x="?F"])
apply (rule exI[where x="?r"])
apply (subst D_k[symmetric])
apply (rule conjI)
apply (rule split)
by (metis (mono_tags, lifting) Minimal.hyps(2) Suc_leI assms(1)
derivation_ge_list_all le_neq_implies_less list_all_length list_all_take)
qed
qed
lemma Derives1_Derives1_swap:
assumes "i < j"
and "Derives1 a j p b"
and "Derives1 b i q c"
shows "\<exists> b'. Derives1 a i q b' \<and> Derives1 b' (j - 1 + length (snd q)) p c"
proof -
from Derives1_split[OF assms(2)] obtain a1 a2 where
a_src: "a = a1 @ [fst p] @ a2" and a_dest: "b = a1 @ snd p @ a2"
and a1_len: "length a1 = j" by blast
note a = this
from a have is_sentence_a1: "is_sentence a1"
using Derives1_sentence2 assms(2) is_sentence_concat by blast
from a have is_sentence_a2: "is_sentence a2"
using Derives1_sentence2 assms(2) is_sentence_concat by blast
from a have is_symbol_fst_p: "is_symbol (fst p)"
by (metis Derives1_sentence1 assms(2) is_sentence_concat is_sentence_cons)
from Derives1_split[OF assms(3)] obtain b1 b2 where
b_src: "b = b1 @ [fst q] @ b2" and b_dest: "c = b1 @ snd q @ b2"
and b1_len: "length b1 = i" by blast
have a_take_j: "a1 = take j a" by (metis a1_len a_src append_eq_conv_conj)
have b_take_i: "b1 @ [fst q] = take (Suc i) b"
by (metis append_assoc append_eq_conv_conj b1_len b_src length_append_singleton)
from a_take_j b_take_i take_eq_take_append[where j=j and i="Suc i" and a=a]
have "\<exists> u. a1 = (b1 @ [fst q]) @ u"
by (metis le_iff_add Suc_leI a1_len a_dest append_eq_conv_conj assms(1) take_add)
then obtain u where u1: "a1 = (b1 @ [fst q]) @ u" by blast
then have j_i_u: "j = i + 1 + length u"
using Suc_eq_plus1 a1_len b1_len length_append length_append_singleton by auto
from u1 is_sentence_a1 have is_sentence_b1_u: "is_sentence b1 \<and> is_sentence u"
using is_sentence_concat by blast
have u2: "b2 = u @ snd p @ a2" by (metis a_dest append_assoc b_src same_append_eq u1)
let ?b = "b1 @ (snd q) @ u @ [fst p] @ a2"
from assms have q_dom: "q \<in> \<RR>" by auto
have a_b': "Derives1 a i q ?b"
apply (subst Derives1_def)
apply (rule exI[where x="b1"])
apply (rule exI[where x="u@[fst p]@a2"])
apply (rule exI[where x="fst q"])
apply (rule exI[where x="snd q"])
apply (auto simp add: b1_len is_sentence_cons is_sentence_concat
is_sentence_a2 is_symbol_fst_p is_sentence_b1_u a_src u1 q_dom)
done
from assms have p_dom: "p \<in> \<RR>" by auto
have is_sentence_snd_q: "is_sentence (snd q)"
using Derives1_sentence2 a_b' is_sentence_concat by blast
have b'_c: "Derives1 ?b (j - 1 + length (snd q)) p c"
apply (subst Derives1_def)
apply (rule exI[where x="b1 @ (snd q) @ u"])
apply (rule exI[where x="a2"])
apply (rule exI[where x="fst p"])
apply (rule exI[where x="snd p"])
apply (auto simp add: is_sentence_concat is_sentence_b1_u is_sentence_a2 p_dom
is_sentence_snd_q b_dest u2 b1_len j_i_u)
done
show ?thesis
apply (rule exI[where x="?b"])
apply (rule conjI)
apply (rule a_b')
apply (rule b'_c)
done
qed
definition derivation_shift :: "derivation \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> derivation"
where
"derivation_shift D left right = map (\<lambda> d. (fst d - left + right, snd d)) D"
lemma derivation_shift_empty[simp]: "derivation_shift [] left right = []"
by (auto simp add: derivation_shift_def)
lemma derivation_shift_cons[simp]:
"derivation_shift (d#D) left right = ((fst d - left + right, snd d)#(derivation_shift D left right))"
by (simp add: derivation_shift_def)
lemma Derivation_append: "Derivation a (D@E) c = (\<exists> b. Derivation a D b \<and> Derivation b E c)"
proof(induct D arbitrary: a c E)
case Nil thus ?case by auto
next
case (Cons d D) thus ?case by auto
qed
lemma Derivation_implies_append:
"Derivation a D b \<Longrightarrow> Derivation b E c \<Longrightarrow> Derivation a (D@E) c"
using Derivation_append by blast
lemma Derivation_swap_single_end_to_front:
"i < j \<Longrightarrow> derivation_ge D j \<Longrightarrow> Derivation a (D@[(i,r)]) b \<Longrightarrow>
Derivation a ((i,r)#(derivation_shift D 1 (length (snd r)))) b"
proof(induct D arbitrary: a)
case Nil thus ?case by auto
next
case (Cons d D)
from Cons have "\<exists> c. Derives1 a (fst d) (snd d) c \<and> Derivation c (D @ [(i, r)]) b"
by simp
then obtain c where ac: "Derives1 a (fst d) (snd d) c"
and cb: "Derivation c (D @ [(i, r)]) b" by blast
from Cons(3) have D_j: "derivation_ge D j" by (simp add: derivation_ge_cons)
from Cons(1)[OF Cons(2) D_j cb, simplified]
obtain x where cx: "Derives1 c i r x" and
xb: "Derivation x (derivation_shift D 1 (length (snd r))) b" by auto
have i_fst_d: "i < fst d" using Cons derivation_ge_cons by auto
from Derives1_Derives1_swap[OF i_fst_d ac cx]
obtain b' where ab': "Derives1 a i r b'" and
b'x: "Derives1 b' (fst d - 1 + length (snd r)) (snd d) x" by blast
show ?case using ab' b'x xb by auto
qed
lemma Derivation_swap_single_mid_to_front:
assumes "i < j"
and "derivation_ge D j"
and "Derivation a (D@[(i,r)]@E) b"
shows "Derivation a ((i,r)#((derivation_shift D 1 (length (snd r)))@E)) b"
proof -
from assms have "\<exists> x. Derivation a (D@[(i, r)]) x \<and> Derivation x E b"
using Derivation_append by auto
then obtain x where ax: "Derivation a (D@[(i, r)]) x" and xb: "Derivation x E b"
by blast
with assms have "Derivation a ((i, r)#(derivation_shift D 1 (length (snd r)))) x"
using Derivation_swap_single_end_to_front by blast
then show ?thesis using Derivation_append xb by auto
qed
lemma length_derivation_shift[simp]:
"length(derivation_shift D left right) = length D"
by (simp add: derivation_shift_def)
definition LeftDerives1 :: "sentence \<Rightarrow> nat \<Rightarrow> rule \<Rightarrow> sentence \<Rightarrow> bool"
where
"LeftDerives1 u i r v = (leftmost i u \<and> Derives1 u i r v)"
lemma LeftDerives1_implies_leftderives1: "LeftDerives1 u i r v \<Longrightarrow> leftderives1 u v"
by (metis Derives1_def LeftDerives1_def append_eq_conv_conj leftderives1_def
leftmost_def)
lemma leftmost_Derives1_leftderives:
"leftmost i a \<Longrightarrow> Derives1 a i r b \<Longrightarrow> leftderives b c \<Longrightarrow> leftderives a c"
using LeftDerives1_def LeftDerives1_implies_leftderives1
leftderives1_implies_leftderives leftderives_trans by blast
theorem Derivation_implies_leftderives_gen:
"Derivation a D (u@v) \<Longrightarrow> is_word u \<Longrightarrow> (\<exists> w.
leftderives a (u@w) \<and>
(v = [] \<longrightarrow> w = []) \<and>
(\<forall> X. is_first X v \<longrightarrow> is_first X w))"
proof (induct "length D" arbitrary: D a u v)
case 0
then have "a = u@v" by auto
thus ?case by (rule_tac x = v in exI, auto)
next
case (Suc n)
from Suc have "D \<noteq> []" by auto
with Suc Derivation_leftmost have "\<exists> i. leftmost i a" by auto
then obtain i where i: "leftmost i a" by blast
show "?case"
proof (cases "derivation_ge D (Suc i)")
case True
with Suc have leftmost: "leftmost i (u@v)"
by (rule_tac leftmost_unaffected_Derivation[OF True i], auto)
have length_u: "length u \<le> i"
using leftmost_append[OF leftmost Suc(4)] .
have take_Suc: "take (Suc i) a = take (Suc i) (u@v)"
using Derivation_take[OF True Suc(3)] .
with length_u have is_prefix_u: "is_prefix u a"
by (metis append_assoc append_take_drop_id dual_order.strict_implies_order
is_prefix_def le_imp_less_Suc take_all take_append)
have a: "u @ drop (length u) a = a"
using is_prefix_unsplit[OF is_prefix_u] .
from take_Suc have length_take_Suc: "length (take (Suc i) a) = Suc i"
by (metis Suc_leI i leftmost_def length_take min.absorb2)
have v: "v \<noteq> []"
proof(cases "v = []")
case False thus ?thesis by auto
next
case True
with length_u have right: "length(take (Suc i) (u@v)) = length u" by simp
note left = length_take_Suc
from left right take_Suc have "Suc i = length u" by auto
with length_u show ?thesis by auto
qed
then have "\<exists> X w. v = X#w" by (cases v, auto)
then obtain X w where v: "v = X#w" by blast
have is_first_X: "is_first X (drop (length u) a)"
apply (rule_tac is_first_drop_length[where v=v and w=w and k="Suc i"])
apply (simp_all add: take_Suc v)
apply (metis Suc_leI i leftmost_def)
apply (insert length_u)
apply arith
done
show ?thesis
apply (rule exI[where x="drop (length u) a"])
by (simp add: a v is_first_cons is_first_X)
next
case False
have Di: "derivation_ge D i"
using leftmost_Derivation[OF i Suc(3), where j=i, simplified] .
from split_derivation_leftmost[OF Di False]
obtain E F r where D_split: "D = E @ [(i, r)] @ F"
and E_Suc: "derivation_ge E (Suc i)" by auto
let ?D = "(derivation_shift E 1 (length (snd r)))@F"
from D_split
have "Derivation a ((i,r) # ?D) (u @ v)"
using Derivation_swap_single_mid_to_front E_Suc Suc.prems(1) lessI by blast
then have "\<exists> y. Derives1 a i r y \<and> Derivation y ?D (u @ v)" by simp
then obtain y where ay:"Derives1 a i r y"
and yuv: "Derivation y ?D (u @ v)" by blast
have length_D': "length ?D = n" using D_split Suc.hyps(2) by auto
from Suc(1)[OF length_D'[symmetric] yuv Suc(4)]
obtain w where "leftderives y (u @ w)" and "(v = [] \<longrightarrow> w = [])"
and "\<forall>X. is_first X v \<longrightarrow> is_first X w" by blast
then show ?thesis using ay i leftmost_Derives1_leftderives by blast
qed
qed
lemma derives_implies_leftderives_gen: "derives a (u@v) \<Longrightarrow> is_word u \<Longrightarrow> (\<exists> w.
leftderives a (u@w) \<and>
(v = [] \<longrightarrow> w = []) \<and>
(\<forall> X. is_first X v \<longrightarrow> is_first X w))"
using Derivation_implies_leftderives_gen derives_implies_Derivation by blast
lemma derives_implies_leftderives: "derives a b \<Longrightarrow> is_word b \<Longrightarrow> leftderives a b"
using derives_implies_leftderives_gen by force
fun LeftDerivation :: "sentence \<Rightarrow> derivation \<Rightarrow> sentence \<Rightarrow> bool"
where
"LeftDerivation a [] b = (a = b)"
| "LeftDerivation a (d#D) b = (\<exists> x. LeftDerives1 a (fst d) (snd d) x \<and> LeftDerivation x D b)"
lemma LeftDerives1_implies_Derives1: "LeftDerives1 a i r b \<Longrightarrow> Derives1 a i r b"
by (metis LeftDerives1_def)
lemma LeftDerivation_implies_Derivation:
"LeftDerivation a D b \<Longrightarrow> Derivation a D b"
proof (induct D arbitrary: a)
case (Nil) thus ?case by simp
next
case (Cons d D)
thus ?case
using CFG.LeftDerivation.simps(2) CFG_axioms Derivation.simps(2)
LeftDerives1_implies_Derives1 by blast
qed
lemma LeftDerivation_implies_leftderives: "LeftDerivation a D b \<Longrightarrow> leftderives a b"
proof (induct D arbitrary: a b)
case Nil thus ?case by simp
next
case (Cons d D)
note ihyps = this
from ihyps have "\<exists> x. LeftDerives1 a (fst d) (snd d) x \<and> LeftDerivation x D b" by auto
then obtain x where "LeftDerives1 a (fst d) (snd d) x" and xb: "LeftDerivation x D b" by blast
with LeftDerives1_implies_leftderives1 have d1: "leftderives a x" by auto
from ihyps xb have d2:"leftderives x b" by simp
show "leftderives a b" by (rule leftderives_trans[OF d1 d2])
qed
lemma leftmost_witness[simp]: "leftmost (length x) (x@(N#y)) = (is_word x \<and> is_nonterminal N)"
by (auto simp add: leftmost_def)
lemma leftderives1_implies_LeftDerives1:
assumes leftderives1: "leftderives1 u v"
shows "\<exists> i r. LeftDerives1 u i r v"
proof -
from leftderives1 have
"\<exists>x y N \<alpha>. u = x @ [N] @ y \<and> v = x @ \<alpha> @ y \<and> is_word x \<and> is_sentence y \<and> (N, \<alpha>) \<in> \<RR>"
by (simp add: leftderives1_def)
then obtain x y N \<alpha> where
u:"u = x @ [N] @ y" and
v:"v = x @ \<alpha> @ y" and
x:"is_word x" and
y:"is_sentence y" and
r:"(N, \<alpha>) \<in> \<RR>"
by blast
show ?thesis
apply (rule_tac x="length x" in exI)
apply (rule_tac x="(N, \<alpha>)" in exI)
apply (auto simp add: LeftDerives1_def)
apply (simp add: leftmost_def x u rule_nonterminal_type[OF r])
apply (simp add: Derives1_def u v)
apply (rule_tac x=x in exI)
apply (rule_tac x=y in exI)
apply (auto simp add: x y r)
done
qed
lemma LeftDerivation_LeftDerives1:
"LeftDerivation a S y \<Longrightarrow> LeftDerives1 y i r z \<Longrightarrow> LeftDerivation a (S@[(i,r)]) z"
proof (induct S arbitrary: a y z i r)
case Nil thus ?case by simp
next
case (Cons s S) thus ?case
by (metis LeftDerivation.simps(2) append_Cons)
qed
lemma leftderives_implies_LeftDerivation: "leftderives a b \<Longrightarrow> \<exists> D. LeftDerivation a D b"
proof (induct rule: leftderives_induct)
case Base
show ?case by (rule exI[where x="[]"], simp)
next
case (Step y z)
note ihyps = this
from ihyps obtain D where ay: "LeftDerivation a D y" by blast
from ihyps leftderives1_implies_LeftDerives1 obtain i r where yz: "LeftDerives1 y i r z" by blast
from LeftDerivation_LeftDerives1[OF ay yz] show ?case by auto
qed
lemma LeftDerivation_append:
"LeftDerivation a (D@E) c = (\<exists> b. LeftDerivation a D b \<and> LeftDerivation b E c)"
proof(induct D arbitrary: a c E)
case Nil thus ?case by auto
next
case (Cons d D) thus ?case by auto
qed
lemma LeftDerivation_implies_append:
"LeftDerivation a D b \<Longrightarrow> LeftDerivation b E c \<Longrightarrow> LeftDerivation a (D@E) c"
using LeftDerivation_append by blast
lemma Derivation_unique_dest: "Derivation a D b \<Longrightarrow> Derivation a D c \<Longrightarrow> b = c"
apply (induct D arbitrary: a b c)
apply auto
using Derives1_unique_dest by blast
lemma Derivation_unique_src: "Derivation a D c \<Longrightarrow> Derivation b D c \<Longrightarrow> a = b"
apply (induct D arbitrary: a b c)
apply auto
using Derives1_unique_src by blast
lemma LeftDerives1_unique: "LeftDerives1 a i r b \<Longrightarrow> LeftDerives1 a j s b \<Longrightarrow> i = j \<and> r = s"
using Derives1_def LeftDerives1_def leftmost_unique by auto
lemma leftlang: "\<L> = { v | v. is_word v \<and> is_leftderivation v }"
by (metis (no_types, lifting) CFG.is_derivation_def CFG.is_leftderivation_def
CFG.leftderivation_implies_derivation CFG_axioms Collect_cong
\<L>_def derives_implies_leftderives)
lemma leftprefixlang: "\<L>\<^sub>P = { u | u v. is_word u \<and> is_leftderivation (u@v) }"
apply (auto simp add: \<L>\<^sub>P_def)
using derives_implies_leftderives_gen is_derivation_def is_leftderivation_def apply blast
using leftderivation_implies_derivation by blast
lemma derives_implies_leftderives_cons:
"is_word a \<Longrightarrow> derives u (a@X#b) \<Longrightarrow> \<exists> c. leftderives u (a@X#c)"
by (metis append_Cons append_Nil derives_implies_leftderives_gen is_first_def)
lemma is_word_append[simp]: "is_word (a@b) = (is_word a \<and> is_word b)"
by (auto simp add: is_word_def)
lemma \<L>\<^sub>P_split: "a@b \<in> \<L>\<^sub>P \<Longrightarrow> a \<in> \<L>\<^sub>P"
by (auto simp add: \<L>\<^sub>P_def)
lemma \<L>\<^sub>P_is_word: "a \<in> \<L>\<^sub>P \<Longrightarrow> is_word a"
by (metis (no_types, lifting) leftprefixlang mem_Collect_eq)
definition Derive :: "sentence \<Rightarrow> derivation \<Rightarrow> sentence"
where
"Derive a D = (THE b. Derivation a D b)"
lemma Derivation_dest_ex_unique: "Derivation a D b \<Longrightarrow> \<exists>! x. Derivation a D x"
using CFG.Derivation_unique_dest CFG_axioms by blast
lemma Derive:
assumes ab: "Derivation a D b"
shows "Derive a D = b"
proof -
note the1_equality[OF Derivation_dest_ex_unique[OF ab] ab]
thus ?thesis by (simp add: Derive_def)
qed
end
end
| {"subset_name": "curated", "file": "formal/afp/LocalLexing/Derivations.thy"} |
TITLE: Prove $\prod_{k=1}^n(1+a_k)\leq1+2\sum_{k=1}^n a_k$
QUESTION [4 upvotes]: I want to prove
$$\prod_{k=1}^n(1+a_k)\leq1+2\sum_{k=1}^n a_k$$
if $\sum_{k=1}^n a_k\leq1$ and $a_k\in[0,+\infty)$
I have no idea where to start, any advice would be greatly appreciated!
REPLY [1 votes]: David C. Ullrich's answer shows a very simple way to do this. Here is an alternative which is more complicated but which allows us to derive a sharper inequality if needed. We treat the problem as an optimization problem of maximizing the product $P = \prod_{k=1}^n(1+a_k)$ over $a_k\geq 0$ under the constraint $S = \sum_{k=1}^n a_k = r$ for some $0\leq r \leq 1$.
We put up the Lagrangian
$$L = P + \lambda(r-S)$$
The extremal point satisfies
$$\frac{\partial L}{\partial a_i} = \frac{P}{1+a_i} - \lambda = 0 \implies 1+a_i = \frac{P}{\lambda} \implies a_1=a_2=\ldots=a_n = \frac{r}{n}$$
This is a local maximum and we can rule out the possibility of any maximum points lying on the boundary $a_i = 0$, so the point is a global maximum. This shows that
$$\max_{\substack{\{a_i\}_{i=1}^n\in[0,\infty)\\ \sum a_k = r}}\prod_{k=1}^n(1+a_k) = \left(1+\frac{r}{n}\right)^n \leq e^r$$
so the problem reduces to showing that $1+2r-e^r\geq 0$ for $r\in[0,1]$, which follows by, say, using Taylor's theorem.
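Both bounds are easy to sanity-check numerically. Below is a rough Python sketch (standard library only; the helper `product` is just for readability, not from the argument above) that samples random admissible tuples and tests the constrained-maximum bound together with the final inequality $1+2r-e^r\geq 0$:

```python
import math
import random

def product(a):
    """Return the product over k of (1 + a_k)."""
    p = 1.0
    for x in a:
        p *= 1 + x
    return p

random.seed(0)
for _ in range(2000):
    n = random.randint(1, 8)
    raw = [random.random() for _ in range(n)]
    scale = random.random() / sum(raw)      # forces sum(a) <= 1
    a = [x * scale for x in raw]
    s = sum(a)
    # constrained maximum from the Lagrange condition: all a_k = s/n
    assert product(a) <= (1 + s / n) ** n + 1e-12
    # the inequality to be proved
    assert product(a) <= 1 + 2 * s + 1e-12

# final reduction: 1 + 2r - e^r >= 0 on [0, 1]
assert min(1 + 2 * (k / 1000) - math.exp(k / 1000)
           for k in range(1001)) >= -1e-12
print("all checks passed")
```

Every sampled tuple satisfies both bounds, consistent with the argument above.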
By the same approach we can deduce the slightly stronger result
$$\prod_{k=1}^n(1+a_k) \leq 1 + \frac{e^r-1}{r}\sum_{k=1}^n a_k~~~\text{when}~~~\sum_{k=1}^n a_k \leq r \leq 1$$
in particular for $r=1$ which is the problem at hand we can reduce the constant $2$ down to $e-1 \simeq 1.71$. | {"set_name": "stack_exchange", "score": 4, "question_id": 1772905} |
TITLE: What if the net force provided for a circular motion is larger than the required centripetal force?
QUESTION [5 upvotes]: Will an object be pulled towards the centre linearly if the net force provided for a circular motion is larger than the required centripetal force? And why?
For example, if the object in a circular motion which is connected by a string is pulled towards the centre by hand.
REPLY [19 votes]: Let's be more exact about this:
Newton's second law for planar motion in polar coordinates is given by
$$\mathbf F=m\left(\ddot r-r\dot\theta^2\right)\hat r+m\left(r\ddot\theta+2\dot r\dot\theta\right)\hat\theta$$
where $r$ is the radial coordinate and $\theta$ is the angle from the $x$-axis.
If we apply only a radially inward force $\mathbf F=-F\,\hat r$, then we end up with two coupled differential equations
$$\ddot r=r\dot\theta^2-\frac Fm$$
$$\ddot\theta=-\frac{2\dot r\dot\theta}{r}$$
Just to check, let's solve this problem for uniform circular motion first. For initial conditions we will use (I will leave off units on my numbers) $r(0)=10$, $\dot r(0)=0$, $\theta(0)=0$, $\dot\theta(0)=1$. Let's set $m=1$. For uniform circular motion, this means that we want $F=mr\dot\theta^2=1\cdot10\cdot1^2=10$. And of course we get uniform circular motion, as shown in the x-y plot below
So, now what if we keep our same initial conditions that we had in our uniform circular motion, and we suddenly double our force magnitude from $10$ to $20$? Well, unlike what other (now deleted) answers are saying, we don't get a spiral to the origin. We actually get oscillations in $r$, as shown below:
This makes sense. From a fictitious force perspective, the centrifugal force acting on the object will increase as it moves radially inward, thus there comes a point where the object is pulled outwards rather than inwards. Then the object will eventually move out, then back in, etc.
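To make the oscillation concrete, here is a rough numerical sketch (plain Python with a hand-rolled RK4 step; my addition, the original used precomputed plots) of the doubled-force case. With the same initial conditions as the circular orbit and $F=20$, the radius should swing between roughly $6.4$ and the starting value $10$ instead of spiralling in:

```python
F, M = 20.0, 1.0          # doubled force, unit mass

def deriv(s):
    r, rdot, thdot = s    # theta itself never feeds back, so it is omitted
    return (rdot, r * thdot ** 2 - F / M, -2.0 * rdot * thdot / r)

def rk4_step(s, dt):
    add = lambda a, b, c: tuple(x + c * y for x, y in zip(a, b))
    k1 = deriv(s)
    k2 = deriv(add(s, k1, dt / 2))
    k3 = deriv(add(s, k2, dt / 2))
    k4 = deriv(add(s, k3, dt))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * u + w)
                 for x, p, q, u, w in zip(s, k1, k2, k3, k4))

s = (10.0, 0.0, 1.0)      # r, rdot, thetadot -- same start as the F=10 circle
rs = []
for _ in range(20000):    # 20 time units at dt = 1e-3
    s = rk4_step(s, 1e-3)
    rs.append(s[0])

print(min(rs), max(rs))   # r oscillates: min near 6.4, max returns to 10
# angular momentum r^2 * thetadot stays at its initial value 100
assert abs(s[0] ** 2 * s[2] - 100.0) < 1e-6
```

The turning points agree with energy conservation in the effective potential $L^2/(2r^2)+Fr$ with $L=100$, whose inner root is $(5+\sqrt{425})/4\approx 6.40$.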
If we want to get to the center, let's try increasing the force over time. As a first pass, lets make the force magnitude a linearly increasing function of time that starts at our uniform circular motion force. For example, if $F=10\cdot(1+10\cdot t)$ we end up with this trajectory:
where the trajectory can get as close to the origin as you want as the force increases. However, there will still be oscillations in $r$. You will not get a perfect spiral with this type of force.
To gain more insight, let's reverse engineer how to get a spiral. As a simple first step, let's look at a spiral that goes inward with a constant linear radial speed and constant angular speed. This is easily described by the following equations (note that I'm using the variable $v$ here as the "inward speed", not in the usual sense like $v=r\omega$)
$$r(t)=r_0-vt$$
$$\theta(t)=\omega t$$
So we know the force acting on our object is given by
$$\mathbf F=m\left(\ddot r-r\dot\theta^2\right)\hat r+m\left(r\ddot\theta+2\dot r\dot\theta\right)\hat\theta=m\left(0-(r_0-vt)\omega^2\right)\hat r+m\left(0-2v\omega\right)\hat\theta$$
So, we want a force
$$\mathbf F=-m\omega^2(r_0-vt)\,\hat r-2mv\omega\,\hat\theta$$
So, this cannot be done with a string because $F_\theta\neq0$.
We are close though! More realistically, if we are actually pulling on a string by hand then we are likely directly controlling $r(t)$ while having $F_\theta=0$. So let's combine the two classes of scenarios covered above and say $\mathbf F=-F\hat r$ for our string and constrain $r(t)=r_0-vt$ to try and get an inward spiral. Then our equations of motion become
$$0=\dot\theta^2(r_0-vt)-\frac Fm$$
$$\ddot\theta=\frac{2v\dot\theta}{r_0-vt}$$
The second differential equation lets us determine $\dot\theta(t)$ as (notice how angular momentum is conserved, which is a nice sanity check)
$$\dot\theta(t)=\frac{r_0^2\dot\theta(0)}{(r_0-vt)^2}$$
And so the force we need is given by
$$F=m\dot\theta(t)^2(r_0-vt)=\frac{mr_0^4\dot\theta(0)^2}{(r_0-vt)^3}$$
We get a centripetal force that is increasing in magnitude, which is what we wanted. But notice how now it increases as $1/(r_0-vt)^3$ rather than just linearly with respect to $t$. Note that now we can only look at $t<r_0/v$ since crossing $t=r_0/v$ would make an infinite force.
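As a sanity check on this force law (a sketch I'm adding, using the same parameters $m=1$, $r_0=10$, $v=1$, $\dot\theta(0)=1$): plugging the prescribed tension back into the free equations of motion should reproduce $r(t)=r_0-vt$, i.e. a constant inward radial speed, up to $t<r_0/v$:

```python
R0, V, W0, M = 10.0, 1.0, 1.0, 1.0   # r(0), pull-in speed, thetadot(0), mass

def tension(t):
    # the force law derived above: F = m r0^4 w0^2 / (r0 - v t)^3
    return M * R0 ** 4 * W0 ** 2 / (R0 - V * t) ** 3

def deriv(t, s):
    r, rdot, thdot = s
    return (rdot, r * thdot ** 2 - tension(t) / M, -2.0 * rdot * thdot / r)

def rk4_step(t, s, dt):
    add = lambda a, b, c: tuple(x + c * y for x, y in zip(a, b))
    k1 = deriv(t, s)
    k2 = deriv(t + dt / 2, add(s, k1, dt / 2))
    k3 = deriv(t + dt / 2, add(s, k2, dt / 2))
    k4 = deriv(t + dt, add(s, k3, dt))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * u + w)
                 for x, p, q, u, w in zip(s, k1, k2, k3, k4))

t, dt = 0.0, 1e-4
s = (R0, -V, W0)                      # r, rdot, thetadot
for _ in range(50000):                # integrate up to t = 5 < r0 / v
    s = rk4_step(t, s, dt)
    t += dt

r, rdot, thdot = s
print(r, rdot, thdot)  # expect r -> 5, rdot -> -1, thetadot -> r0^2 w0 / r^2
```

The integrator tracks the intended spiral: $\dot r$ stays at $-v$ and $\dot\theta$ grows as $r_0^2\dot\theta(0)/(r_0-vt)^2$, exactly the conserved-angular-momentum profile used in the derivation.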
So finally, let's answer your question
Will an object be pulled towards the centre linearly if the net force provided for a circular motion is larger than the required centripetal force? And why?
Assuming by "linearly" you mean with a constant radial speed, then the answer is yes as long as you increase the force in just the right way. This has a simple explanation in the frame rotating with the object: you are supplying just the right amount of force to balance the centrifugal force at all points in time. | {"set_name": "stack_exchange", "score": 5, "question_id": 534748} |
\begin{document}
\title{Representation type of ${}^{\infty}_{\hspace{2mm}\lambda}\mathcal{H}_{\mu}^1$}
\author{Yuriy Drozd and Volodymyr Mazorchuk}
\date{}
\maketitle
\begin{abstract}
For a semi-simple finite-dimensional complex Lie algebra $\mathfrak{g}$ we classify the
representation type of the associative algebras associated with the categories $\HH$ of
Harish-Chandra bimodules for $\mathfrak{g}$.
\end{abstract}
\section{The result}\label{s1}
Let $\mathfrak{g}$ be a simple finite-dimensional complex Lie algebra
with a fixed triangular decomposition, $\mathfrak{g}=\mathfrak{n}_-\oplus
\mathfrak{h}\oplus\mathfrak{n}_+$, let $\lambda$ and $\mu$ be two
dominant and integral (but not necessarily regular) weights, let
$U(\mathfrak{g})$ be the universal enveloping algebra of $\mathfrak{g}$,
and let $Z(\mathfrak{g})$ be the center of $U(\mathfrak{g})$.
Denote by $\chi_{\lambda}$ and $\chi_{\mu}$ the central characters of
the Verma modules $\Delta(\lambda)$ and $\Delta(\mu)$ respectively.
Let further $\HH$ denote the full subcategory of the category of
all $U(\mathfrak{g})$-bimodules, which consists of all $X$ satisfying
the following conditions (see \cite[Kapitel~6]{Ja}):
\begin{enumerate}[(1)]
\item $X$ is finitely generated as a bimodule;
\item $X$ is algebraic, that is $X$ is a direct sum of finite-dimensional
$\mathfrak{g}$-modules with respect to the diagonal action
$g\mapsto(g,\sigma(g))$, where $\sigma$ is the Chevalley involution on
$\mathfrak{g}$;
\item $x(z-\chi_{\mu}(z))=0$ for all $x\in X$ and $z\in Z(\mathfrak{g})$;
\item for every $x\in X$ and $z\in Z(\mathfrak{g})$ there exists
$k\in\mathbb{N}$ such that $(z-\chi_{\lambda}(z))^kx=0$.
\end{enumerate}
For regular $\mu$ the category $\HH$ is equivalent to a block of the
BGG category $\mathcal{O}$, associated with the triangular decomposition above,
see \cite{BGG}. For singular $\mu$ the category $\HH$ is equivalent to
a block of the parabolic generalization $\mathcal{O}(\mathfrak{p},\Lambda)$
of $\mathcal{O}$, studied in \cite{FKM}. Moreover, from \cite{FKM,So1} it follows
that every block of $\mathcal{O}$ and $\mathcal{O}(\mathfrak{p},\Lambda)$ is
equivalent to some $\HH$. Every $\HH$ is equivalent to the module category of a
properly stratified finite-dimensional associative algebra. The regular blocks
of $\HH$ can be used to categorify a parabolic Hecke module, see \cite{MS}.
Let $\mathbf{W}$ be the Weyl group of $\mathfrak{g}$ and $\rho$ be the half
of the sum of all positive roots of $\mathfrak{g}$. Then $\mathbf{W}$ acts on
$\mathfrak{h}^*$ in the usual way and we recall the following {\em dot-action}
of $\mathbf{W}$ on $\mathfrak{h}^*$: $w\cdot \nu=w(\nu+\rho)-\rho$.
Let $\mathbf{G}\subset \mathbf{W}$ be the stabilizer of $\lambda$
with respect to the dot-action, and $\mathbf{H}\subset \mathbf{W}$ be the
stabilizer of $\mu$ with respect to the dot-action. We will say that the triple
$(\mathbf{W},\mathbf{G},\mathbf{H})$ is associated to $\HH$. In the present
paper we classify the categories $\HH$ according to their representation type
in terms of the associated triples, thus extending the results of \cite{FPN,BKM,GP}.
Let $(\mathbf{W},\mathbf{G},\mathbf{H})$ be the triple, associated to $\HH$, and
$(\mathbf{W},\mathbf{G}',\mathbf{H}')$ be the triple, associated to some
${}^{\infty}_{\hspace{0.7mm}\lambda'}\mathcal{H}_{\mu'}^1$. Then
from \cite[Theorem~5.9]{BG} and \cite[Theorem~11]{So1} it follows that
$\HH$ and ${}^{\infty}_{\hspace{0.7mm}\lambda'}\mathcal{H}_{\mu'}^1$ are equivalent
if there exists an automorphism, $\varphi$, of the Coxeter system $(\mathbf{W},S)$,
where $S$ is the set of simple reflections associated to our triangular decomposition,
such that $\varphi(\mathbf{G})=\mathbf{G}'$ and $\varphi(\mathbf{H})=\mathbf{H}'$.
By the {\em Coxeter type} of a triple, $(\mathbf{W},\mathbf{G},\mathbf{H})$, we mean the
triple that consists of the Coxeter types of the corresponding components of
$(\mathbf{W},\mathbf{G},\mathbf{H})$. Note that, in general, the Coxeter type of the
triple does not determine the triple in a unique way (for example, one can compare
the cases \eqref{tm.1.5}, \eqref{tm.2.4} and \eqref{tm.2.5} in the formulation
of Theorem~\ref{tmain} below). Our main result is the following statement:
\begin{theorem}\label{tmain}
\begin{enumerate}[(1)]
\item\label{tm.1} The category $\HH$ is of finite type
if and only if the Coxeter type of the associated triple is
\begin{enumerate}[(a)]
\item\label{tm.1.1} any and $\mathbf{W}=\mathbf{G}$;
\item\label{tm.1.2} $(A_n,A_{n-1},A_n)$, $(B_n,B_{n-1},B_n)$, $(C_n,C_{n-1},C_n)$, or
$(G_2,A_1,G_2)$;
\item\label{tm.1.3} $(A_1,e,e)$;
\item\label{tm.1.4} $(A_n,A_{n-1},A_{n-1})$;
\item\label{tm.1.5} $(A_n,A_{n-1},A_{n-2})$, where $A_{n-2}$ is obtained from $A_{n}$
by taking away the first and the last roots;
\item\label{tm.1.6} $(B_2,A_{1},A_{1})$ or $(C_2,A_1,A_1)$, and
$\mathbf{G}=\mathbf{H}$ (in both cases);
\item\label{tm.1.7} $(B_n,B_{n-1},B_{n-1})$ or $(C_n,C_{n-1},C_{n-1})$, where $n\geq 3$;
\item\label{tm.1.8} $(A_2,A_{1},e)$.
\end{enumerate}
\item\label{tm.2} The category $\HH$ is tame if and only if the Coxeter type of
the associated triple is
\begin{enumerate}[(a)]
\item\label{tm.2.1} $(A_3,A_1\times A_1,A_3)$, $(A_2,e,A_2)$, $(B_2,e,B_2)$,
$(G_2,e,G_2)$, $(B_3,A_2,B_3)$, $(C_3,A_2,C_3)$, or $(D_n,D_{n-1},D_n)$ where $n\geq 4$;
\item\label{tm.2.2} $(B_2,A_1,A_1)$ or $(C_2,A_1,A_1)$, and $\mathbf{G}\neq\mathbf{H}$
(in both cases);
\item\label{tm.2.3} $(A_n,A_{n-1},A_{1}\times A_{n-2})$, $n>2$;
\item\label{tm.2.4} $(A_n,A_{n-1},A_{n-2})$, $n>2$, where $A_{n-2}$ is included into
$A_{n-1}$ and contains either the first or the last root of $A_n$;
\item\label{tm.2.5} $(A_n,A_{n-1},A_{n-2})$, $n>2$, where $A_{n-2}$ is not included into
$A_{n-1}$;
\item\label{tm.2.7} $(A_3,A_2,e)$, $(B_2,A_1,e)$, $(C_2,A_1,e)$.
\end{enumerate}
\item\label{tm.3} In all other cases the category $\HH$ is wild.
\end{enumerate}
\end{theorem}
For regular $\mu$ Theorem~\ref{tmain} gives the classification of the
representation type of the blocks of the category $\mathcal{O}$ obtained in
\cite{FPN} (see also \cite{BKM} for a different proof). Formally, we do not use any
results from \cite{FPN} and \cite{BKM}; however, the main idea of our proof is
similar to that of \cite{BKM}.
In the case $\mathbf{H}=\mathbf{W}$ (i.e. $\mu$ is most singular) Theorem~\ref{tmain}
reduces to the classification of the representation type for the algebra
$\mathtt{C}(\mathbf{W},\mathbf{G})$ of $\mathbf{G}$-invariants in the coinvariant
algebra associated to $\mathbf{W}$. This result was obtained in \cite{GP} and, in fact,
our argument in the present paper is based upon it.
The last important ingredient in the proof of Theorem~\ref{tmain} (the proof itself
is presented in Section~\ref{s3}) is the classification of the representation type of all
centralizer subalgebras in the Auslander algebra $\mathtt{A}_n$ of $\Bbbk[x]/(x^n)$.
This classification is given in Section~\ref{s2}. Two series of centralizer subalgebras,
namely those considered in Lemma~\ref{lh7} and Lemma~\ref{lh8}, seem to be rather
interesting and non-trivial.
The paper finishes with an extension of Theorem~\ref{tmain} to the case of a semi-simple
Lie algebra $\mathfrak{g}$. This is presented in Section~\ref{s4}, where one more
interesting tame algebra arises.
We would like to finish the introduction with the remark that a first step towards
the classification of the representation type of the blocks of Rocha-Caridi's parabolic
analogue $\mathcal{O}_S$ of $\mathcal{O}$ was recently made in \cite{BN}. The next step would be to
complete this classification and then to classify the representation type of the ``mixed''
version of $\mathcal{O}_S$ and $\mathcal{O}(\mathfrak{p},\Lambda)$. As the results of
\cite{BN} and of the present paper suggest, this might give some interesting tame algebras
in a natural way.
\section{Representation type of the centralizer subalgebras in the Auslander
algebra of $\Bbbk[x]/(x^n)$}\label{s2}
Throughout the paper we compose arrows of quiver algebras from right to
left. Let $\Bbbk$ be an algebraically closed field. Recall that, according to
\cite{Dr3}, every finite-dimensional associative $\Bbbk$-algebra has either finite,
tame or wild representation type. In what follows we will call the latter statement
the {\em Tame and Wild Theorem}. The algebras that are not of finite representation
type are said to be of infinite representation type.
Let $A=(A_{ob},A_{mor})$ be a $\Bbbk$-linear category. An $A$-module, $M$, is a functor
from $A$ to the category of $\Bbbk$-vector spaces. In particular, for $x\in A_{ob}$ and
$\alpha\in A_{mor}$ we will denote by $M(x)$ and $M(\alpha)$ the images of $x$ and $\alpha$
under $M$ respectively.
For a positive integer $n>1$ let $\mathtt{A}_n$ be the algebra given by the
following quiver with relations:
\begin{displaymath}
\xymatrix{
1\ar@/^/[rr]^{a_1} && 2\ar@/^/[rr]^{a_2}\ar@/^/[ll]^{b_1} && \dots
\ar@/^/[ll]^{b_2}\ar@/^/[rr]^{a_{n-1}} && n\ar@/^/[ll]^{b_{n-1}}
}\quad\quad
\begin{array}{ll}
a_{i}b_{i}=b_{i+1}a_{i+1}, & i=1,\dots,n-2,\\
a_{n-1}b_{n-1}=0.
\end{array}
\end{displaymath}
The algebra $\mathtt{A}_n$ is the Auslander algebra of $\Bbbk[x]/(x^n)$
(see for example \cite[Section~7]{DR}).
For $X\subset \{2,3,\dots,n\}$ let $e_X$ denote the direct sum of all primitive
idempotents of $\mathtt{A}_n$ which correspond to the vertices from $\{1\}\cup X$.
Set $\mathtt{A}_n^{X}=e_X\mathtt{A}_n e_X$. The main result of this section is the
following:
\begin{theorem}\label{taus}
\begin{enumerate}[(i)]
\item\label{taus.1} The algebra $\mathtt{A}_n^{X}$ has finite representation type
if and only if $X\subset \{2,n\}$.
\item\label{taus.2} The algebra $\mathtt{A}_n^{X}$ has tame representation type if
and only if either $n>3$ and $X=\{3\}$, $\{2,3\}$, $\{n-1\}$, $\{n-1,n\}$, or
$n=4$ and $X=\{2,3,4\}$.
\item\label{taus.3} The algebra $\mathtt{A}_n^{X}$ is wild in all other cases.
\end{enumerate}
\end{theorem}
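As an illustration of Theorem~\ref{taus}\eqref{taus.2} (this remark is only meant as a sanity check), consider $n=4$ and $X=\{2,3,4\}$. Then $\{1\}\cup X$ is the whole set of vertices, hence
\begin{displaymath}
e_X=1\quad\text{ and }\quad \mathtt{A}_4^{X}=\mathtt{A}_4,
\end{displaymath}
and the Auslander algebra $\mathtt{A}_4$ itself is indeed tame, see \cite{DR}.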
To prove Theorem~\ref{taus} we will need the following lemmas:
\begin{lemma}\label{lh1}
The algebra $\mathtt{A}^{\{m\}}_n$ has infinite representation type for
$m\in\{3,\dots,n-1\}$ and $n\geq 4$.
\end{lemma}
\begin{proof}
The algebra $\mathtt{A}^{\{m\}}_n$ is given by the following quiver with relations:
\begin{equation}\label{eqh1}
\xymatrix{
1\ar@/^/[rr]^a\ar@(ul,dl)[]_{x} && m\ar@/^/[ll]^b\ar@(ur,dr)[]^{y}
} \quad\quad\quad
\begin{array}{ll}
ax=ya, & xb=by,\\
ab=y^{m-1}, & ba=x^{m-1},\\
y^{n-m+1}=0,
\end{array}
\end{equation}
where $x=b_1a_1$, $y=b_ma_m$, $a=a_{m-1}\dots a_1$, $b=b_1\dots b_{m-1}$. Modulo the
square of the radical, $\mathtt{A}^{\{m\}}_n$ gives rise to the following diagram of infinite type:
\begin{displaymath}
\xymatrix{
1\ar@{-}[rrd]\ar@{-}[d] && m\ar@{-}[d] \\
1\ar@{-}[rru] && m
}.
\end{displaymath}
Hence $\mathtt{A}^{\{m\}}_n$ has infinite representation type as well.
\end{proof}
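Let us record, for the reader's convenience, why the diagram in the proof above is of infinite type: its underlying graph is a cycle on four vertices, that is, the extended Dynkin diagram $\widetilde{A}_3$,
\begin{displaymath}
\xymatrix@=1.2em{
\bullet\ar@{-}[r]\ar@{-}[d] & \bullet\ar@{-}[d]\\
\bullet\ar@{-}[r] & \bullet
}
\end{displaymath}
and such diagrams are of infinite representation type, see \cite{DF,DR0}.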
\begin{lemma}\label{lh2}
The algebra $\mathtt{A}^X_n$ is wild for $X=\{3,m\}$, where $m>4$.
\end{lemma}
\begin{proof}
In this case the algebra $\mathtt{A}^X_n$ is given by the following quiver with relations:
\begin{displaymath}
\xymatrix{
1\ar@/^/[rr]^a\ar@(ul,dl)[]_{x} && 3 \ar@/^/[ll]^b\ar@/^/[rr]^s\ar@(ul,ur)[]^{y}
&& m\ar@/^/[ll]^t\ar@(ur,dr)[]^{z}
} \quad\quad
\begin{array}{ll}
ax=ya, & xb=by,\\
sy=zs, & yt=tz,\\
ab=y^2,& ba=x^2,\\
st=z^{m-3},&ts=y^{m-3},\\
z^{n-m+1}=0,
\end{array}
\end{displaymath}
where $x=b_1a_1$, $y=b_3a_3$, $z=b_ma_m$, $a=a_2a_1$, $b=b_1b_2$,
$s=a_{m-1}\dots a_3$, $t=b_3\dots b_{m-1}$. Note that $z=0$ if $m=n$.
Modulo the square of the radical, $\mathtt{A}^{X}_n$ gives rise to the following diagram:
\begin{displaymath}
\xymatrix{
1\ar@{-}[rrd]\ar@{-}[d] && 3\ar@{-}[d]\ar@{-}[rrd] && m\ar@{--}[d] \\
1\ar@{-}[rru] && 3\ar@{-}[rru] && m
}
\end{displaymath}
(where the dashed line disappears in the case $m=n$). With or without the dashed
line the diagram is not an extended Dynkin quiver and hence is wild
(see \cite{DF,DR0}). Hence $\mathtt{A}^{X}_n$ is wild as well.
\end{proof}
\begin{lemma}\label{lh3}
The algebra $\mathtt{A}^X_n$ is wild for $X=\{2,n-1\}$ and $n\geq 5$.
\end{lemma}
\begin{proof}
To make the quivers in the proof below look better we set $m=n-1$.
The algebra $\mathtt{A}^X_n$ is given by the following quiver with relations:
\begin{displaymath}
\xymatrix{
1\ar@/^/[rr]^a && 2 \ar@/^/[ll]^b\ar@/^/[rr]^s
&& m\ar@/^/[ll]^t\ar@(ur,dr)[]^{x}
} \quad\quad
\begin{array}{ll}
sab=xs, & abt=tx,\\
st=0, &ts=(ab)^{n-3},\\
x^{2}=0,
\end{array}
\end{displaymath}
where $a=a_1$, $b=b_1$, $s=a_{n-2}\dots a_2$, $t=b_2\dots b_{n-2}$, $x=b_{n-1}a_{n-1}$.
The universal covering of $\mathtt{A}^{X}_n$ has the wild fragment (a hereditary algebra
whose underlying quiver is not an extended Dynkin diagram, see \cite{DF,DR0}) indicated by the
dotted arrows in the following picture:
\begin{displaymath}
\xymatrix{
\dots && \dots &&&& \dots &&&& \\
1\ar[rr]^{a} && 2 \ar[rrrr]^{s}\ar[lld]_{b}&&&& m \ar@{.>}[d]^{x}
\ar@{.>}[dddllll]|->>>>>>>>>>{t}\\
1\ar@{.>}[rr]^{a} && 2 \ar@{.>}[rrrr]^{s}\ar@{.>}[lld]_{b}&&&& m
\ar[d]^{x}\ar[dddllll]|->>>>>>>>>>{t}\\
1\ar@{.>}[rr]^{a} && 2 \ar[rrrr]^{s}\ar@{.>}[lld]_{b}&&&& m \ar[d]^{x}\\
1\ar@{.>}[rr]^{a} && 2 \ar[rrrr]^{s}\ar[lld]_{b}&&&& m \ar[d]^{x}\\
1\ar[rr]^{a} && 2 \ar[rrrr]^{s}&&&& m \\
\dots && \dots &&&& \dots\\
}
\end{displaymath}
Hence $\mathtt{A}^{X}_n$ is wild as well.
\end{proof}
\begin{lemma}\label{lh4}
The algebra $\mathtt{A}^{\{3,4\}}_5$ is wild.
\end{lemma}
\begin{proof}
The algebra $\mathtt{A}^{\{3,4\}}_5$ is given by the following quiver with relations:
\begin{displaymath}
\xymatrix{
1\ar@/^/[rr]^a \ar@(ul,dl)[]^{x} && 3 \ar@/^/[ll]^b\ar@/^/[rr]^s
&& 4\ar@/^/[ll]^t
} \quad\quad
\begin{array}{ll}
ax=tsa, & xb=bts,\\
ba=x^2,&ab=(ts)^{2},\\
(st)^{2}=0,
\end{array}
\end{displaymath}
where $a=a_2a_1$, $b=b_1b_2$, $s=a_3$, $t=b_3$, $x=b_{1}a_{1}$.
The universal covering of $\mathtt{A}^{\{3,4\}}_5$ has the wild fragment (a hereditary algebra
whose underlying quiver is not an extended Dynkin diagram, see \cite{DF,DR0}) indicated by the
dotted arrows in the following picture:
\begin{displaymath}
\xymatrix{
\dots &&&& \dots && \dots &&&& \\
1\ar[rrrr]^{a}\ar[d]_x &&&& 3\ar@{.>}[rr]^{s}\ar@{.>}[ddllll]|->>>>>>>{b}
&& 4\ar@{.>}[dll]|-t\\
1\ar@{.>}[rrrr]^>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>{a}\ar@{.>}[d]_x
&&&& 3\ar@{.>}[rr]^{s} && 4\ar[dll]|-t\\
1\ar[rrrr]^{a} &&&& 3\ar[rr]^{s} && 4\\
\dots &&&& \dots && \dots\\
}
\end{displaymath}
Hence $\mathtt{A}^{\{3,4\}}_5$ is wild as well.
\end{proof}
\begin{lemma}\label{lh6}
The algebra $\mathtt{A}^{\{m\}}_n$ is wild for $m\in\{4,\dots,n-2\}$
and $n\geq 6$.
\end{lemma}
\begin{proof}
The algebra $\mathtt{A}^{\{m\}}_n$ is given by \eqref{eqh1}. We consider its quotient
$\mathtt{B}$ given by the additional relations $x^3=y^3=ab=ba=0$ (which is possible because of
our restrictions on $m$ and $n$). Then the universal covering of $\mathtt{B}$ exists
and has the following fragment,
\begin{displaymath}
\xymatrix{
m\ar[dr]^y && 1\ar[dr]^x\ar[dl]_a\ar@{-->}[dd] \\
&\ar[dr]^y m && 1\ar[dr]^x\ar[dl]_a && m\ar[dr]^y\ar[dl]_b && 1\ar[dr]^x\ar[dl]_a \\
&& m && 1 && m && 1
},
\end{displaymath}
which is wild by \cite{Un}.
This implies that $\mathtt{B}$ and hence $\mathtt{A}^{\{m\}}_n$ is wild.
\end{proof}
\begin{lemma}\label{lh7}
The algebra $\mathtt{A}^{\{2,n\}}_n$, $n\geq 2$,
is of finite representation type.
\end{lemma}
\begin{proof}
For $n=2,3$ the statement follows from \cite[Section~7]{DR}.
The algebra $\mathtt{A}^{\{2,n\}}_n$, $n\geq 4$, is given by the
following quiver with relations:
\begin{equation}\label{equequ}
\xymatrix{ 1 \ar@/^/[r]^a & 2 \ar@/^/[l]^b \ar@/^/[r]^u & n \ar@/^/[l]^v }
\qquad uv=uab=abv=0,\ vu=(ab)^{n-2},
\end{equation}
where $a=a_1$, $b=b_1$, $u=a_{n-1}\dots a_2$, $v=b_2\dots b_{n-1}$. Note that these
relations imply $(ab)^{n-1}=ab\,(ab)^{n-2}=ab\,vu=(abv)u=0$ and hence
$(ba)^n=b(ab)^{n-1}a=0$. The projective $\mathtt{A}^{\{2,n\}}_n$-module
$P(1)$ is injective, so we can replace $\mathtt{A}^{\{2,n\}}_n$ by
$\mathtt{A}'=\mathtt{A}^{\{2,n\}}_n/\mathrm{soc}(P(1))= \mathtt{A}^{\{2,n\}}_n/((ba)^{n-1})$,
which has the same indecomposable modules except $P(1)$, see \cite[Lemma~9.2.2]{DK}. So from
now on we consider the algebra $\mathtt{A}'$, i.e. add the relation $(ba)^{n-1}=0$ to
\eqref{equequ}. The algebra $\mathtt{A}'$ has a simply connected covering
$\widetilde{\mathtt{A}}$, see \cite{BoGa}, which is the category, given by the following
quiver with relations (we show the case $n=5$, in the general case the arrow starting at
$n_k$ ends at $2_{n-2+k}$):
\begin{displaymath}
\xymatrix@R=1.5ex{
&&&\ar[ddl]|-v&& \\
\vdots & \ar[dl]|-b &\vdots&&&\vdots \\
1_0 \ar@/_4ex/@{.}[dddddddd] \ar[rr]|-a && 2_0 \ar[rrr]|-u \ar[ddll]|-b \ar@/_4ex/@{.}[dddddd]
\ar@{.}[ddrrr] &\ar[ddl]|-v&& n_0 \ar[ddddddlll]|->>>>>>>>>>>>>>>>>v
\ar@/^3ex/@{.}[dddddd] \ar@{.}[ddddddddlll]\\
&&&& &&\\
1_1 \ar[rr]|-a && 2_1 \ar[rrr]|-u \ar[ddll]|-b \ar@{.}[ddrrr] \ar@/_4ex/@{.}[dddddd]
&\ar[ddl]|-v&& n_1 \ar[ddddddlll]|->>>>>>>>>>>>>>>>>v \ar@/^3ex/@{.}[dddddd] \\
&&&&&& \\
1_2 \ar[rr]|-a && 2_2 \ar[rrr]|->>>>>>>>>>>u \ar[ddll]|-b \ar@{.}[ddrrr]&&& n_2 \ar[ddl]|-v \\
&&& &&&\\
1_3 \ar[rr]|-a && 2_3 \ar[rrr]|-u \ar[ddll]|-b \ar@{.}[ddrrr]&&& n_3 \ar[ddl]|-v \\
&&& &&&\\
1_4 \ar[rr]|-a && 2_4 \ar[rrr]|-u \ar[dl]|-b &&& n_4 \ar[ddl]|-v \\
\vdots&&\vdots&&&\vdots \\
&&&&& \\
}
\end{displaymath}
We omit the indices at the arrows $a,b,u,v$. They satisfy the same relations
as in $\mathtt{A}'$, which are shown by the dotted lines. Consider the full subcategory
$\mathtt{B}_m$ of $\widetilde{\mathtt{A}}$ with the set of objects
$\mathbf{S}=\{1_k, m\le k\leq m+n-1;\,\,2_k, m\leq k\leq m+n-2;\,\,n_m\}$.
Let $M$ be an $\widetilde{\mathtt{A}}$-module, $N_m$ be its restriction to
$\mathtt{B}_m$, $N_m=\bigoplus_{i=1}^sK_i$, where $K_i$ are indecomposable
$\mathtt{B}_m$-modules. It is well known that every $K_i$ is completely determined by
the subset of objects $\mathbf{S}_i=\{x\,|\,K_i(x)\ne0\}$ and if $1_m\in\mathbf{S}_i$,
then $1_{m+n-1}\notin\mathbf{S}_i$. Moreover, all $K_i(x)$ with $x\in\mathbf{S}_i$ are
one-dimensional and all arrows between these objects correspond to the identity maps.
Since $uab=abv=0$, $K_i$ splits out of the whole module $M$ whenever
$\mathbf{S}_i\supseteq\{2_m,2_{m+n-2}\}$. Suppose that, for every integer $m$, $N_m$ does
not contain such direct summands. It implies that $M(vu)=0$. Therefore $M$ can be
considered as a module over $\overline{\mathtt{A}}$, where $\overline{\mathtt{A}}$ is given
by the following quiver
\begin{displaymath}
\xymatrix{ \dots & n' \ar[d]^v && n' \ar[d]^v & \dots & n' \ar[d]^v &\dots\\
\dots\ 1 \ar[r]^a & 2 \ar[r]^b\ar[d]^u & 1 \ar[r]^a & 2 \ar[r]^b \ar[d]^u
&\dots \ar[r]^a & 2\ar[r]^b \ar[d]^u & 1\ \dots \\
\dots & n && n & \dots & n &\dots }
\end{displaymath}
with relations $uv=uab=abv=(ab)^{n-2}=0$. One easily checks that any indecomposable
representation of $\overline{\mathtt{A}}$ has dimension at most $2n-5$. Hence,
$\overline{\mathtt{A}}$ is representation (locally) finite, i.e. for every object
$x\in\overline{\mathtt{A}}$ there are only finitely many indecomposable
representations $M$ with $M(x)\ne0$. By \cite{BoGa}, the algebra $\mathtt{A}^{\{2,n\}}_n$
is representation (locally) finite as well, which completes the proof.
\end{proof}
\begin{lemma}\label{lh8}
The algebra $\mathtt{A}^{\{n-1,n\}}_n$, $n>3$, is tame.
\end{lemma}
\begin{proof}
For $q=n-1$ the algebra $\mathtt{A}_n^{\{q,n\}}$ is given by the
following quiver with relations
\begin{displaymath}
\xymatrix{ 1 \ar@(ul,dl)[]_c \ar@/^/[r]^u & {q} \ar@/^/[l]^v \ar@/^/[r]^a &
n \ar@/^/[l]^b }\quad\quad c^n=ab=uv=0,\ vu=c^{n-2},\ cv=vba,\ uc=bau,
\end{displaymath}
where $c=b_1a_1,\,a=a_{q},\,b=b_{q},\,u=a_{n-2}\dots a_1,\,v=b_1\dots b_{n-2}$.
The projective module $P(1)$ is also injective, hence, using \cite[Lemma~9.2.2]{DK} as
it was done in the proof of Lemma~\ref{lh7}, we can replace $\mathtt{A}=\mathtt{A}_n^{\{q,n\}}$
by $\mathtt{A}'=\mathtt{A}/\mathrm{soc}(P(1))=\mathtt{A}/(c^{q})$. Let $M$ be an
$\mathtt{A}'$-module. Choose a basis in $M(1)$ so that the matrix $C=M(c)$ is in
Jordan normal form; more precisely,
\begin{displaymath}
M(c)=\bigoplus_{i=1}^{q} J_i\otimes I_{m_i},
\end{displaymath}
where $J_i$ is the nilpotent Jordan block of size $i\times i$ and $I_{m_i}$ is the identity
matrix of size $m_i\times m_i$ (here $m_i$ is just the number of Jordan blocks of size $i$).
Thus
\begin{displaymath}
J_i\otimes I_m = \begin{pmatrix}
0 & I_m & 0 & \dots & 0 & 0 \\ 0& 0& I_m & \dots & 0 & 0 \\ \hdotsfor 6 \\
0&0&0&\dots &0&I_m \\ 0&0&0&\dots &0&0
\end{pmatrix}_{i\times i}
\end{displaymath}
(here $i\times i$ means $i$ blocks by $i$ blocks, each of size $m_i\times m_i$). Choose bases
in $M(q)$ and $M(n)$ such that the matrices $A=M(a)$ and $B=M(b)$ are of the form
\begin{displaymath}
A= \begin{pmatrix}
0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \\ 0&0&0&0&0 \\ 0&0&0&0&0
\end{pmatrix}, \qquad
B= \begin{pmatrix}
0 & I & 0 & 0 \\0 & 0 & 0 & I \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0
\end{pmatrix},
\end{displaymath}
where the vertical (horizontal) stripes of $A$ are of the same size as the horizontal
(respectively, vertical) stripes of $B$, and $I$ is the identity matrix;
we do not specify these sizes here. Set $r=nq/2$; it is the number of the horizontal
and vertical stripes in $C$. Then $M(u)$
and $M(v)$ can be considered as block matrices: $M(u)=U=(U_k^{ij})_{5\times r}$
and $M(v)=V=(V_{ij}^k)_{r\times5}$,
where the index $k=1,\dots,5$ corresponds to the $k$-th horizontal stripe of $B$;
$i=1,\dots,q,\ j=1,\dots,i$, and the stripe $(ij)$ corresponds to the $j$-th horizontal
stripe of the matrix $J_i\otimes I_{m_i}$ in the decomposition of $C$. The conditions
$uc=bau$ and $cv=vba$ imply that for $i>1$ the only nonzero blocks $U_k^{ij}$ and
$V_{ij}^k$ can be
\begin{align*}
U_k^{ii}\quad &\text{ and }\ U_1^{i,i-1}=U_5^{ii} \\
V^k_{i1}\quad &\text{ and }\ V^5_{i2}=V^1_{i1}.
\end{align*}
Moreover, we also have $U_5^{11}=V^1_{11}=0$. Changing bases in the spaces $M(x)$,
$x=1,q,n$, so that the matrices $A,B,C$ remain of the same form, we can
replace $U$ and $V$ respectively by $T^{-1}US$ and $S^{-1}VT$, where $S,T$ are
invertible matrices of the appropriate sizes such that $SA=AS$ and $TU=UQ,\,QV=VT$
for an invertible matrix $Q$. We also consider $S$ and $T$ as block matrices:
$S=(S_{st}^{ij})_{r\times r}$ and $T=(T_l^k)_{5\times5}$ with respect to the division
of $A,B,C$. Then the conditions above can be rewritten as follows:
\begin{itemize}
\item $S_{st}^{ij}$ can only be nonzero if $i-j<s-t$ or $i-j=s-t,\,s\le i$;
\item $S_{st}^{ij}=S_{st'}^{ij'}$ if $t-j=t'-j'$;
\item $T$ is block triangular: $T_l^k=0$ if $k<l$, and $T^1_1=T^5_5$;
\item all diagonal blocks $S^{ij}_{ij}$ and $T_k^k$ are invertible.
\end{itemize}
In particular, for the vertical stripes $U^{ii}$ and for the horizontal stripes $U_k$
of the matrix $U$ the following transformations are allowed:
\begin{enumerate}
\item\label{ob1} Replace $U^{ii}$ by $U^{ii}Z$.
\item\label{ob2} Replace $U_k$ by $ZU_k$, where $k=2,3,4$.
\item\label{ob3} Replace $U_1$ and $U_5$ respectively by $ZU_1$ and $ZU_5$.
\item\label{ob4} Replace $U^{ii}$ by $U^{ii}+U^{jj}Z$, where $j<i$.
\item\label{ob5} Replace $U_k$ by $U_k+U_lZ$, where $k<l$.
\end{enumerate}
Here $Z$ denotes an arbitrary matrix of the appropriate size; moreover, in the
cases \ref{ob1}--\ref{ob3}
it must be invertible. One can easily see that, using these transformations,
one can subdivide all blocks $U^{ii}_k$ into subblocks so that each stripe
contains at most one nonzero block, which is an identity matrix. Note that the
sizes of the horizontal substripes of $U_1$ and $U_5$ must be the same.
Let $\Lambda^{ii}$ and $\Lambda_k$ be respectively the sets of the vertical and the
horizontal stripes of these subdivisions. Note that all stripes $U^{ij}$ must
be subdivided according to the subdivision of $U^{ii}$, and recall that
$U_1^{i,i-1}=U_5^{ii}$. In particular, there is a one-to-one correspondence
$\lambda\mapsto \lambda'$ between $\Lambda_5$ and $\Lambda_1$.
We make the respective subdivision of the blocks of the matrix $V$, too.
The condition $UV=0$ implies that, whenever the $\lambda$-th vertical stripe of $U$
is nonzero ($\lambda\in\Lambda^{ii}$), the $\lambda$-th horizontal stripe of $V$ is zero. The
condition $VU=C^{q}$ can be rewritten as
\begin{displaymath}
V_{ij}U^{st}= \begin{cases}
I &\text{ if }\ (i,j,s,t)=(q,1,q,q),\\
0 &\text{ otherwise}.
\end{cases}
\end{displaymath}
It implies that there are no zero vertical stripes in the new subdivision of $U^{q,q}$.
Moreover, if $\lambda\in\Lambda^{ii}$, $\mu\in\Lambda_k$, and the block $V^\lambda_\mu$ is
nonzero, then the $\mu$-th vertical stripe of $U$ is zero if $i\ne q$; if $i=q$ this
stripe contains exactly one non-zero block, namely, $U^\mu_\lambda=I$. We denote by
$\overline{\Lambda}^{ii}$ and $\overline{\Lambda}_k$ the sets of those stripes from
$\Lambda^{ii}$ and $\Lambda_k$ which are not completely determined by these rules. Let
$\lambda\in\Lambda_5$, $\lambda'$ be the corresponding element of $\Lambda_1$. If the blocks
$U_\lambda^\mu$ and $U_{\lambda'}^{\mu'}$ are both nonzero, write $\mu\sim\mu'$. Note that
there is at most one element $\mu'$ for which this holds, and that $\mu'\ne\mu$.
One can verify that the sets $\overline{\Lambda}^{ii}$ and $\overline{\Lambda}_k$ can
be linearly ordered so that, applying the transformations of the types \ref{ob1}--\ref{ob5}
from above, we can
replace a stripe $V^\lambda$ by $V^\lambda+V^{\lambda'} Z$ with $\lambda'<\lambda$ and a
stripe $V_\mu$ by $V_\mu+ZV_{\mu'}$, where $\lambda'<\lambda,\,\mu'<\mu$ for any matrix $Z$
(of the appropriate size). We can also replace $V^\lambda$ by $V^\lambda Z$, where $Z$ is
invertible, and replace simultaneously $V_\mu$ and $V_{\mu'}$, where $\mu'\sim\mu$, by
$ZV_\mu$ and $ZV_{\mu'}$ (if $\mu'$ does not exist, just replace $V_\mu$ by $ZV_\mu$) with
invertible $Z$. Therefore, we obtain a special case of the matrix problems considered in
\cite{Bo}, which are known to be tame. Hence, the algebra $\mathtt{A}_n^{\{q,n\}}$ is tame
as well.
\end{proof}
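For completeness, here is the count behind the equality $r=nq/2$ used in the proof above: the horizontal (equivalently, vertical) stripes of $C$ are indexed by the pairs $(i,j)$ with $1\leq j\leq i\leq q$, whence
\begin{displaymath}
r=\sum_{i=1}^{q}i=\frac{q(q+1)}{2}=\frac{nq}{2},
\end{displaymath}
since $n=q+1$.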
\begin{proof}[Proof of Theorem~\ref{taus}.]
Lemma~\ref{lh7} and Lemma~\ref{lh1} imply Theorem~\ref{taus}\eqref{taus.1}.
The statement of Theorem~\ref{taus}\eqref{taus.3} follows from
Theorem~\ref{taus}\eqref{taus.1} and Theorem~\ref{taus}\eqref{taus.2} using the
Tame and Wild Theorem.
Hence we have to prove Theorem~\ref{taus}\eqref{taus.2} only.
It is known, see for example \cite{DR}, that $\mathtt{A}_n$ has finite representation
type for $n\leq 3$, is tame for $n=4$, and is wild for all other $n$. This, in particular,
proves Theorem~\ref{taus}\eqref{taus.2} for $n\leq 4$.
If $n\geq 6$ then from Lemma~\ref{lh6} it follows that if $\mathtt{A}^X_n$ is tame then
$X\subset\{2,3,n-1,n\}$. From Theorem~\ref{taus}\eqref{taus.1} we know that
$X\not\subset \{2,n\}$. From Lemma~\ref{lh2} it follows that $\{3,n-1\}\not\subset X$
and $\{3,n\}\not\subset X$. From Lemma~\ref{lh3} it follows that $\{2,n-1\}\not\subset X$.
This leaves us the cases $X=\{n-1,n\}$, $\{n-1\}$, $\{2,3\}$ and $\{3\}$.
In the first two cases $\mathtt{A}^X_n$ is tame by Lemma~\ref{lh8}. The algebra
$\mathtt{A}^{\{2,3\}}_n$, $n\geq 3$, is given by the following quiver with relations:
\begin{displaymath}
\xymatrix{
1\ar@/^/[rr]^a && 2\ar@/^/[rr]^s\ar@/^/[ll]^b && 3\ar@/^/[ll]^t
}\quad\quad
\begin{array}{l}
ab=ts,\\
(st)^{n-2}=0,
\end{array}
\end{displaymath}
where $a=a_1$, $b=b_1$, $s=a_2$, $t=b_2$.
For $n\geq 5$ this algebra is tame as a quotient of the classical tame problem from
\cite{NR}. Hence $\mathtt{A}^{\{3\}}_n$ is tame as well.
For $n=5$ Lemma~\ref{lh4} implies that $\mathtt{A}^X_n$ is wild if
$X\supset\{3,4\}$, Lemma~\ref{lh2} implies that $\mathtt{A}^X_n$ is
wild if $X\supset\{3,5\}$, and Lemma~\ref{lh3} implies that $\mathtt{A}^X_n$ is
wild if $X\supset\{2,4\}$. Above we have already shown that the algebra
$\mathtt{A}^{\{2,3\}}_5$ is tame, and hence $\mathtt{A}^{\{3\}}_5$ is tame as well.
Finally, that the algebras $\mathtt{A}^{\{4,5\}}_5$ and $\mathtt{A}^{\{4\}}_5$ are tame
follows from Lemma~\ref{lh8}. This completes the proof.
\end{proof}
\section{Proof of Theorem~\ref{tmain}}\label{s3}
We briefly recall the structure of $\HH$. We refer the reader to \cite{BG,So1,FKM,KM}
for details. By \cite[Theorem~5.9]{BG},
the category ${}^{\infty}_{\hspace{1mm}\lambda}\mathcal{H}_{0}^1$ is equivalent
to the block $\mathcal{O}_{\lambda}$ of the BGG category $\mathcal{O}$, \cite{BGG}.
Let $\mathtt{O}(\mathbf{W},\mathbf{G})$ denote the basic associative algebra,
whose module category is equivalent to $\mathcal{O}_{\lambda}$.
The simple modules in $\mathcal{O}_{\lambda}$ are in
natural bijection with the cosets $\mathbf{W}/\mathbf{G}$ (under this bijection the
coset $\mathbf{G}$ corresponds to the dominant highest weight). For
$w\in \mathbf{W}$ let $L(w)$ denote the corresponding simple module in
$\mathcal{O}_{\lambda}$, $P(w)$ be the projective cover of $L(w)$, $\Delta(w)$ be
the corresponding Verma module, and $I(w)$ be the injective envelope of $L(w)$.
Then \cite{So1} implies that for the longest element $w_0\in\mathbf{W}$ one has
$\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w_0))\cong\mathtt{C}(\mathbf{W},\mathbf{G})$
(recall that this is the subalgebra of $\mathbf{G}$-invariants in the coinvariant
algebra, associated to $\mathbf{W}$). The left multiplication in $\mathbf{W}$ induces
an action of $\mathbf{H}$ on the set $\mathbf{W}\cdot \lambda$. Let $P(\lambda,\mathbf{H})$
denote the direct sum of indecomposable projective modules that correspond to the longest
elements in all orbits of this action. The category $\HH$ is equivalent, by \cite{KM},
to the module category over $\mathtt{B}(\mathbf{G},\mathbf{H})=
\mathrm{End}_{\mathcal{O}_{\lambda}}(P(\lambda,\mathbf{H}))$. From \cite{So1} it follows
that $\mathtt{B}(\mathbf{G},\mathbf{H})$ depends on $\mathbf{G}$ rather than on $\lambda$.
We start with Theorem~\ref{tmain}\eqref{tm.1}, that is with the case of finite
representation type.
Note that $P(w_0)$ is always a direct summand of $P(\lambda,\mathbf{H})$. Hence
$\mathtt{C}(\mathbf{W},\mathbf{G})$ is a centralizer subalgebra of
$\mathtt{B}(\mathbf{G},\mathbf{H})$. In particular, for $\HH$ to be of finite
representation type, $\mathtt{C}(\mathbf{W},\mathbf{G})$ must be of finite representation
type as well. According to \cite[Theorem~7.2]{GP}, $\mathtt{C}(\mathbf{W},\mathbf{G})$ is
of finite representation type in the following cases:
\begin{enumerate}[(I)]
\item\label{s3.1} $\mathbf{W}=\mathbf{G}$;
\item\label{s3.2} $\mathbf{W}$ is of type $A_n$ and $\mathbf{G}$ is of type $A_{n-1}$;
\item\label{s3.3} $\mathbf{W}$ is of type $B_n$ and $\mathbf{G}$ is of type $B_{n-1}$;
\item\label{s3.4} $\mathbf{W}$ is of type $C_n$ and $\mathbf{G}$ is of type $C_{n-1}$;
\item\label{s3.5} $\mathbf{W}$ is of type $G_2$ and $\mathbf{G}$ is of type $A_1$.
\end{enumerate}
Moreover, in all these cases $\mathtt{C}(\mathbf{W},\mathbf{G})\cong\mathbb{C}[x]/(x^r)$,
where $r=[\mathbf{W}:\mathbf{G}]$. The last observation and \cite[Theorem~1]{FKM}
imply that in all the above cases the category $\mathcal{O}_{\lambda}$ is equivalent to
$\mathtt{A}_r\mathrm{-mod}$. In particular, the algebra
$\mathtt{B}(\mathbf{G},\mathbf{H})$ is isomorphic to $\mathtt{A}_r^X$ for appropriate $X$,
and, in the notation of Section~\ref{s2}, the algebra $\mathtt{C}(\mathbf{W},\mathbf{G})$
is the centralizer subalgebra, which corresponds to the vertex $1$.
The case \eqref{s3.1} gives Theorem~\ref{tmain}\eqref{tm.1.1}.
In the cases \eqref{s3.2}, \eqref{s3.3}, \eqref{s3.4}, and \eqref{s3.5} it follows from
Theorem~\ref{taus}\eqref{taus.1} that we have the following possibilities
for $\mathtt{B}(\mathbf{G},\mathbf{H})$:
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has one simple module.} This implies
$\mathbf{W}=\mathbf{H}$ and gives Theorem~\ref{tmain}\eqref{tm.1.2}.
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has two simple modules.}
These simples correspond either to the dominant and the anti-dominant weights
in $\mathcal{O}_{\lambda}$ or to the anti-dominant weight and its neighbor.
By a direct calculation we get the following: the case
$r=2$ gives Theorem~\ref{tmain}\eqref{tm.1.3}, and the case
$r>2$ gives Theorem~\ref{tmain}\eqref{tm.1.4}.
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has three simple modules.}
These simples correspond to the following weights in $\mathcal{O}_{\lambda}$:
the anti-dominant one, its neighbor, and the dominant one.
By a direct calculation we get the following: the case
$r=3$ gives Theorem~\ref{tmain}\eqref{tm.1.8}, and the case
$r>3$ gives Theorem~\ref{tmain}\eqref{tm.1.5}, Theorem~\ref{tmain}\eqref{tm.1.6},
and Theorem~\ref{tmain}\eqref{tm.1.7}. This proves Theorem~\ref{tmain}\eqref{tm.1}.
Let us now proceed with the tame case, that is with Theorem~\ref{tmain}\eqref{tm.2}.
If $\mathtt{C}(\mathbf{W},\mathbf{G})$ is of finite representation type, that is in the
cases \eqref{s3.1}--\eqref{s3.5}, Theorem~\ref{taus}\eqref{taus.2} gives us
the following possibilities for $\mathtt{B}(\mathbf{G},\mathbf{H})$:
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has two simple modules.}
These simples correspond to the following weights in $\mathcal{O}_{\lambda}$:
either the anti-dominant one and the neighbor of its neighbor, or
the anti-dominant one and the neighbor of the dominant one.
By a direct calculation we get that these cases lead to
Theorem~\ref{tmain}\eqref{tm.2.2} and Theorem~\ref{tmain}\eqref{tm.2.3}.
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has three simple modules.}
These simples correspond to the following weights in $\mathcal{O}_{\lambda}$:
either the anti-dominant one, its neighbor, and the neighbor of its neighbor,
or the anti-dominant one, its neighbor, and the dominant one.
By a direct calculation we get that these cases lead to
Theorem~\ref{tmain}\eqref{tm.2.4} and Theorem~\ref{tmain}\eqref{tm.2.5}.
{\bf $\mathtt{B}(\mathbf{G},\mathbf{H})$ has four simple modules.} In this
case $r=4$ and a direct calculation gives Theorem~\ref{tmain}\eqref{tm.2.7}.
The remaining case (that is Theorem~\ref{tmain}\eqref{tm.2.1}) corresponds to the situation when
$\mathtt{C}(\mathbf{W},\mathbf{G})$ is tame. According to \cite[Theorem~7.2]{GP},
$\mathtt{C}(\mathbf{W},\mathbf{G})$ is tame in the following cases:
\begin{enumerate}[(I)]
\setcounter{enumi}{5}
\item\label{s3.n1} $\mathbf{W}$ has rank $2$ and $\mathbf{G}=\{e\}$;
\item\label{s3.n2} $\mathbf{W}$ is of type $A_3$ and $\mathbf{G}$ is of type $A_{1}\times A_1$;
\item\label{s3.n3} $\mathbf{W}$ is of type $B_3$ and $\mathbf{G}$ is of type $A_2$;
\item\label{s3.n4} $\mathbf{W}$ is of type $C_3$ and $\mathbf{G}$ is of type $A_{2}$;
\item\label{s3.n5} $\mathbf{W}$ is of type $D_n$ and $\mathbf{G}$ is of type $D_{n-1}$.
\end{enumerate}
For $\mathbf{W}=\mathbf{H}$ the cases \eqref{s3.n1}, \eqref{s3.n2}, \eqref{s3.n3},
\eqref{s3.n4}, and \eqref{s3.n5} give exactly Theorem~\ref{tmain}\eqref{tm.2.1}. Let
us now show that the rest is wild.
If $\mathbf{W}\neq\mathbf{H}$ then $\HH$ has at least two non-isomorphic indecomposable
projective modules, one of which is $P(w_0)$ and the other one is some $P(w)$.
We first consider the cases \eqref{s3.n2}, \eqref{s3.n3},
\eqref{s3.n4}, and \eqref{s3.n5}. In all these cases the restriction of the Bruhat order
to $\mathbf{W}/\mathbf{G}$ gives the following poset:
\begin{equation}\label{eqeqptm}
\xymatrix{
& & & & u_1\ar@{-}[rd]& & &&\\
w_0 \ar@{-}[r]& w_1\ar@{-}[r] & \dots\ar@{-}[r] & w_s\ar@{-}[ru]\ar@{-}[rd] & &
v_s\ar@{-}[r] & \dots\ar@{-}[r]& v_1\ar@{-}[r] & v_0 \\
& & & & u_2\ar@{-}[ru]& & &&\\
}
\end{equation}
From \cite[Theorem~7.3]{GP} it follows that in all these cases
the algebra $\mathtt{C}(\mathbf{W},\mathbf{G})$ has two generators.
We consider the centralizer subalgebra
$\mathtt{D}(w)=\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w_0)\oplus P(w))$ and
let $Q(w)$ denote the quotient of $\mathtt{D}(w)$ modulo the square of the
radical. Recall that the algebra $\mathtt{O}(\mathbf{W},\mathbf{G})$ is Koszul,
see \cite{BGS}, and hence the category $\mathcal{O}_{\lambda}$ is positively
(Koszul) graded, see also \cite{St}. Hence $\mathtt{D}(w)$ is positively graded as well.
We are going to show that $\mathtt{D}(w)$ is always wild. We start with the
following statement.
\begin{lemma}\label{lnewmul}
Let $w\in\{w_1,\dots,w_s,u_1,u_2,v_0,\dots,v_s\}$. Then
\begin{displaymath}
[P(v_0):L(w)]=
\begin{cases}
1, & w\in \{u_1,u_2,v_0,\dots,v_s,w_0\};\\
2, & w\in \{w_1,w_3,\dots,w_s\},
\end{cases}
\end{displaymath}
where $[P(v_0):L(w)]$ denotes the composition multiplicity.
\end{lemma}
\begin{proof}
By \cite{BGS} the category $\mathcal{O}_{\lambda}$ is Koszul dual to the regular
block of the corresponding parabolic category of Rocha-Caridi, see \cite{RC}. Hence
the multiplicity question for $\mathcal{O}_{\lambda}$ reduces, via the Koszul duality,
to the computation of the extensions in the parabolic case. The latter are given by
Kazhdan-Lusztig polynomials and for the algebras of type \eqref{s3.n2}, \eqref{s3.n3},
\eqref{s3.n4}, and \eqref{s3.n5} these multiplicities are computed in \cite[\S~14]{ES}.
The statement of our lemma follows directly from \cite[\S~14]{ES}.
\end{proof}
Since $L(w_0)$ is a simple Verma module, it occurs exactly once in the composition
series of $\Delta(w)$, which gives rise to a morphism, $\alpha:P(w_0)\to P(w)$. This
morphism has the minimal possible degree (with respect to our positive grading) and hence
does not belong to the square of the radical. Further, the unique (now by the BGG
reciprocity) occurrence of $\Delta(w)$ in the Verma flag of $P(w_0)$ gives a morphism,
$\beta:P(w)\to P(w_0)$, which does not belong to the square of the radical either since it
again has the minimal possible degree. Now we will have to consider several cases.
{\bf Case~A.} Assume first that $w\in\{v_0,v_1,\dots,v_s\}$. The quiver of $Q(w)$ contains
the arrows, corresponding to $\alpha$ and $\beta$. Moreover $Q(w)$ also contains two loops
at the point $w_0$ which correspond to the generators of $\mathtt{C}(\mathbf{W},\mathbf{G})$.
Passing, if necessary, to a quotient of $Q(w)$, we obtain the following configuration:
\begin{equation}\label{wconf2}
\xymatrix{
w_0\ar@/^/@{-}[d]\ar@/_/@{-}[d]\ar@{-}[rrd] && w\ar@{-}[lld]\\
w_0 && w
}.
\end{equation}
Since the underlying diagram is not an extended Dynkin diagram, the configuration
is wild, see \cite{DF,DR0}. This implies that $\mathtt{D}(w)$ and hence $\HH$ is wild in this case.
{\bf Case~B.} Consider now the case $w=u_1$ (the case $w=u_2$ is analogous).
Lemma~\ref{lnewmul} implies that in this case the multiplicity of
$L(w)$ in $\Delta(v_0)$ is $1$. Hence from \cite[Proposition~2.12]{Ba} it follows
that $P(w)$ has simple socle $L(w_0)$, in particular, $P(w)$ is a submodule of
$P(w_0)=I(w_0)$. Injectivity of $P(w_0)$ thus gives a surjection from
$\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w_0))\cong
\mathtt{C}(\mathbf{W},\mathbf{G})$ to $\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w))$.
Note that, by \cite{So1}, $\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w_0))$ is the center
of $\mathtt{O}(\mathbf{W},\mathbf{G})$ and hence is central in
$\mathtt{B}(\mathbf{G},\mathbf{H})$. We still have the elements $\alpha$ and $\beta$ as above,
which do not belong to the square of the radical. Further, using the embedding
$P(w)\hookrightarrow P(w_0)$ one also obtains that $\alpha$ generates
$\mathrm{Hom}_{\mathcal{O}_{\lambda}}(P(w_0),P(w))$ as a
$\mathtt{C}(\mathbf{W},\mathbf{G})$-module and $\beta$ generates
$\mathrm{Hom}_{\mathcal{O}_{\lambda}}(P(w),P(w_0))$ as a
$\mathtt{C}(\mathbf{W},\mathbf{G})$-module.
With this notation, $\mathtt{D}(w)$ has the following quiver:
\begin{displaymath}
\xymatrix{
w_0\ar@/^/[rr]^{\alpha}\ar@(lu,ld)[]_{x} && w\ar@/^/[ll]^{\beta}\ar@(ru,rd)[]^{y}
}.
\end{displaymath}
Note that $\alpha$ is surjective as a homomorphism from
$\mathrm{End}_{\mathtt{D}(w)}(P(w_0))$ to $\mathrm{End}_{\mathtt{D}(w)}(P(w))$
since $P(w)$ has simple socle.
This and the fact that $\mathrm{End}_{\mathcal{O}_{\lambda}}(P(w_0))$ is central
implies the relations $\alpha x=y\alpha$ and $\beta y=x\beta$. Using
\cite[7.12-7.16]{GP} one also easily gets the following additional relations:
$y^{s+2}=0$, $\alpha\beta=c y^{s+1}$ for some $0\neq c\in\mathbb{C}$, $x\beta\alpha=
\beta\alpha x=0$ and $(\beta\alpha)^2=x^{2s+3}$. This implies that the universal
covering of $\mathtt{D}(w)$ has the following fragment (shown for $s=1$):
\begin{equation}\label{eqeqnlm}
\xymatrix{
{\bf w_0}\ar[d]_{x}\ar[rr]^{\alpha}\ar@{--}[drr] &&
w\ar[d]_{y}\ar[ddll]|->>>>>>{\beta} \\
w_0\ar[rr]|->>>>{\alpha}\ar[d]_{x} && w \\
w_0
}
\end{equation}
(here the dashed arrow indicates the commutativity of the corresponding square).
Evaluating the Tits form of this fragment at the point $(1,2,2,2,2)$, where
$1$ is placed in the bold vertex, we obtain $-1<0$ implying that the fragment
\eqref{eqeqnlm} is wild (see for example \cite{CB,Dr}). Hence $\mathtt{D}(w)$
is wild as well.
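The Tits form evaluation above can be spot-checked mechanically. In the sketch below, the reading of the fragment \eqref{eqeqnlm} for $s=1$ as a quiver with five vertices, six arrows, and one commutativity relation, together with the convention that relations contribute to the Tits form with a plus sign, are our assumptions:

```python
def tits_form(d, arrows, relations):
    """q(d) = sum_i d_i^2 - sum_{arrows i->j} d_i*d_j
    + sum_{relations i~>j} d_i*d_j (vertices indexed from 0)."""
    q = sum(x * x for x in d)
    q -= sum(d[i] * d[j] for i, j in arrows)
    q += sum(d[i] * d[j] for i, j in relations)
    return q

# Fragment for s = 1: vertices 0..4 = (bold w_0, w, w_0, w, w_0);
# solid arrows x, alpha, y, beta, alpha, x; one dashed (commutativity) relation.
arrows = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 3), (2, 4)]
relations = [(0, 4)]
print(tits_form([1, 2, 2, 2, 2], arrows, relations))  # -1, so the fragment is wild
```

For contrast, on a quiver of Dynkin type $A_2$ with no relations the form stays positive: `tits_form([1, 1], [(0, 1)], [])` gives $1$.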
{\bf Case~C.} Assume now that $w=w_i$, $i=2,\dots,s-1$. Hence, by Lemma~\ref{lnewmul}
the multiplicity of $L(w)$ in $P(v_0)$ is $2$. We will need the following lemma:
\begin{lemma}\label{hlem}
Let $A$ be a basic associative algebra, let $e$ be an idempotent of $A$ and
$f$ be a primitive direct summand of $e$. Assume that there exist two non-isomorphic
$A$-modules $M$ and $N$ satisfying the following properties:
\begin{enumerate}[(1)]
\item\label{hlem.1} both $M$ and $N$ have simple top and simple socles isomorphic to the
simple $A$-module $L^{A}(f)$, corresponding to $f$;
\item \label{hlem.2} $e\,\mathrm{rad}(M)/\mathrm{soc}(M)=e\,\mathrm{rad}(N)/\mathrm{soc}(N)=0$.
\end{enumerate}
Then $\dim \mathrm{Ext}_{eAe}^1(L^{eAe}(f),L^{eAe}(f))>1$.
\end{lemma}
\begin{proof}
Recall from \cite[Chapter~5]{Au} that $eAe\mathrm{-mod}$ is equivalent to the full
subcategory $\mathcal{M}$ of $A\mathrm{-mod}$, consisting of all $Ae$ approximations of
modules from $A\mathrm{-mod}$. Let $M'$ and $N'$ be the $Ae$-approximations of $M$ and $N$
respectively. Both $M'$ and $N'$ are indecomposable since $M$ and $N$ are indecomposable
by \eqref{hlem.1}. Then the $eAe$-modules $eM'$ and $eN'$ are indecomposable as well, and,
because of \eqref{hlem.1} and \eqref{hlem.2}, both $eM'$ and $eN'$ have length two with
both composition subquotients isomorphic to the simple $eAe$-module $L^{eAe}(f)$.
Assume that $eM'\cong eN'$. Then, by \cite[Chapter~5]{Au}, any $eAe$-isomorphism between
$eM'$ and $eN'$ induces an $A$-isomorphism between $M'$ and $N'$. From \eqref{hlem.1} we
also have that the canonical maps $N\to N'$ and $M\to M'$ are injective, that is we have
\begin{displaymath}
N\hookrightarrow N'\cong M'\hookleftarrow M.
\end{displaymath}
From \eqref{hlem.1}, the definition of the $Ae$-approximation, and the fact that
$f$ is a direct summand of $e$, it follows that the image of $N$ in $N'$ coincides with the
trace of the projective module $Af$ in $N'$. Analogously the image of $M$ in $M'$ coincides
with the trace of the projective module $Af$ in $M'$. This implies $M\cong N$, a
contradiction. The statement follows.
\end{proof}
Since we are not in the multiplicity-free case, from the Kazhdan-Lusztig Theorem
it follows that the quiver of $\mathtt{O}(\mathbf{W},\mathbf{G})$ contains more arrows
than what is indicated on the diagram \eqref{eqeqptm}. Namely, from the results of
\cite[\S~14]{ES} we have $\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(v_{i-1}))\neq 0$.
Note that $\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(w_{i+1}))\neq 0$
also follows from the Kazhdan-Lusztig Theorem since $w_i$ and $w_{i+1}$ are neighbors
(it follows from \cite[\S~14]{ES} as well).
Let now $u\in\{v_{i-1},w_{i+1}\}$. Then we can fix a non-zero element from
$\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(u))$. This means that $L(u)$ occurs in
degree $1$ in the projective module $P(w)$. The module $P(w)$ has a Verma flag, and the above
occurrence of $L(u)$ gives rise to an occurrence of $\Delta(u)$ as a subquotient of $P(w)$.
Since $L(u)$ is in degree $1$ and $\mathcal{O}_{\lambda}$ is positively graded, we can
factor all the Verma subquotients of $P(w)$ except $\Delta(w)$ and $\Delta(u)$ out
obtaining a non-split extension, $N(u)$ say, of $\Delta(u)$ by $\Delta(w)$. By duality,
we have $\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(u),L(w))\neq 0$ as well, and, as $w<u$,
the module $L(w)$ occurs in degree $2$ in the module $N(u)$. This occurrence gives rise to a
map from $N(u)$ to the injective module $I(w)$. Let $N'(u)$ denote the image of this map. By
construction, the module $N'(u)$ is an indecomposable module of Loewy length $3$ with
simple top and simple socle isomorphic to $L(w)$. Moreover,
$\mathrm{Rad}(N'(u))/\mathrm{Soc}(N'(u))$ (the latter is considered as an object
of $\mathcal{O}_{\lambda}$) does not contain $L(w)$ as a subquotient because of the
quasi-hereditary vanishing $\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(w))=0$. Since
$w\neq w_0,w_1$, all occurrences of $L(w_0)$ in $P(w)$ are in degrees $\geq 2$. Hence
$\mathrm{Rad}(N'(u))/\mathrm{Soc}(N'(u))$ does not contain $L(w_0)$ as a
subquotient either. Finally, we observe that $\mathrm{Rad}(N'(v_{i-1}))/\mathrm{Soc}(N'(v_{i-1}))$
contains $L(v_{i-1})$ as a subquotient while $\mathrm{Rad}(N'(w_{i+1}))/\mathrm{Soc}(N'(w_{i+1}))$
does not contain $L(v_{i-1})$ as a subquotient. This implies that $N'(v_{i-1})\not\cong N'(w_{i+1})$.
Hence, applying Lemma~\ref{hlem}, we obtain that the quiver of $Q(w)$ contains at
least two loops at the point $w$. This quiver also contains the elements $\alpha$
and $\beta$ described above. Factoring, if necessary, the extra arrows out, $Q(w)$
thus gives rise to the following configuration:
\begin{equation}\label{wconf1}
\xymatrix{
w_0\ar@{-}[rrd] && w\ar@{-}[lld]\ar@/^/@{-}[d]\ar@/_/@{-}[d]\\
w_0 && w
}.
\end{equation}
Since this is not an extended Dynkin quiver, this configuration is wild, see \cite{DF,DR0}.
Hence $\mathtt{D}(w)$, and thus $\HH$ is wild in this case.
{\bf Case~D.} Let $w=w_s$. In this case from \cite[\S~14]{ES} we have
$\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(v_{s-1}))\neq 0$. We also have
$\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(u_i))\neq 0$, $i=1,2$, since
$w_s$ and $u_i$ are neighbors. Hence the module $P(w)$ contains exactly $3$
copies of $L(w)$ in degree $2$: one lying in the top of the radical of each
of the Verma modules $\Delta(x)$, $x=u_1,u_2,v_{s-1}$, occurring in degree $1$
in the Verma filtration of $P(w)$. Note that $L(w)$ does not occur in degree
$1$ (see Case~C). Further, $L(w_0)$ occurs at most once in degree $1$
(this happens if $s=1$, in which case the occurrence in degree $1$ corresponds
to the socle of $\Delta(w)$). In any case, since we have $3$ occurrences of $L(w)$
in degree $2$, at most one occurrence of $L(w_0)$ in degree $1$, and since
$\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(w_0))\cong \mathbb{C}$ in the
case $s=1$, mapping the degree $2$-occurrences to $I(w)$ we obtain at least
two non-isomorphic modules, $N_1$ and $N_2$, which have simple top and socle
isomorphic to $L(w)$ and no other occurrences of $L(w)$ and $L(w_0)$. Taking
into account $\alpha$ and $\beta$, from Lemma~\ref{hlem} it now follows that
some quotient of $Q(w)$ gives rise to the wild configuration \eqref{wconf1}.
Hence $\mathtt{D}(w)$, and thus $\HH$ is wild in this case as well.
{\bf Case~E.} Finally, let $w=w_1$ and $s>1$. In this case both $\alpha$
and $\beta$ have degree $1$. From \cite[\S~14]{ES} we have
$\mathrm{Ext}_{\mathcal{O}_{\lambda}}^1(L(w),L(v_{0}))\neq 0$, which gives
us $2$ occurrences of $L(w)$ in degree $2$ of the module $P(w)$. One of them
comes from the subquotient $\Delta(v_0)$ in the Verma flag of $P(w)$. But
$v_0$ is dominant, and hence $\Delta(v_0)$ is in fact a submodule. Denote
by $\gamma$ the endomorphism of $P(w)$ of degree $2$, which corresponds to
this occurrence of $L(w)$ in $\Delta(v_0)$. Since
$(\beta\alpha)^2\neq 0$ by \cite[7.12-7.16]{GP}, it follows that
the image of $\alpha\beta$ contains some $L(w_0)$ in degree $3$. However,
$\Delta(v_0)$ does not contain any $L(w_0)$ in degree $2$ (note that
$\Delta(v_0)$ itself starts in degree $1$ in $P(w)$). Hence $\alpha\beta$ and $\gamma$
are linearly independent and thus $\gamma$ does not belong to the square of the radical.
Now we claim that $\gamma^2=\gamma\alpha\beta=\alpha\beta\gamma=0$. The first and the
second equalities, that is $\gamma^2=\gamma\alpha\beta=0$, follow from the easy
observation that $\Delta(v_0)$ does not have any $L(w)$ in degree $3=1+2$.
The last one, that is $\alpha\beta\gamma=0$, follows from the fact that the
degree $1$-copy of $\Delta(v_0)$ belongs to the kernel of $\beta$ since
$P(w_0)$ does not have any $L(v_0)$ in degree $2$. Now, $P(w)$ has two copies of
$L(w)$ in the degree $2s$ which correspond to the subquotients $\Delta(u_1)$
and $\Delta(u_2)$ in the Verma flag of $P(w)$. Hence there should exist an
endomorphism of $P(w)$ of degree $2s$, which is linearly independent with $\alpha\beta$.
Since $\gamma^2=\gamma\alpha\beta=\alpha\beta\gamma=0$, it follows that
this new endomorphism does not belong to the square of the radical of $Q(w)$.
Taking into account $\alpha$ and $\beta$, from Lemma~\ref{hlem} it now follows
that some quotient of $Q(w)$ gives rise to the wild configuration \eqref{wconf1}.
Hence $\mathtt{D}(w)$, and thus $\HH$ is wild in this case as well.
This completes the cases \eqref{s3.n2}, \eqref{s3.n3}, \eqref{s3.n4}, \eqref{s3.n5}.
Finally, let us consider the case \eqref{s3.n1}. Let $t_1$ and $t_2$ be the simple reflections
in $\mathbf{W}$, and let $\theta_{t_1}$, $\theta_{t_2}$ be translation functors through the
$t_1$ and $t_2$-wall respectively. If $\mathbf{H}\neq \mathbf{W}$,
then $\HH$ necessarily contains an indecomposable projective module,
which corresponds to some $w$ such that $\mathfrak{l}(w_0)-\mathfrak{l}(w)=2$.
The modules $\theta_{t_1}L(w_0)$ and $\theta_{t_2}L(w_0)$ are indecomposable and
have the following Loewy filtrations:
\begin{displaymath}
\begin{array}{lc}
& L(w_0)\\
\theta_{t_1}L(w_0):&L({t_1}'w_0)\\
& L(w_0)\\
\end{array},
\quad\quad
\begin{array}{lc}
& L(w_0)\\
\theta_{t_2}L(w_0):&L({t_2}'w_0)\\
& L(w_0)\\
\end{array},
\end{displaymath}
for some ${t_1}',{t_2}'$ such that $\{t_1,t_2\}=\{t_1',t_2'\}$ (the exact values of
$t_1'$ and $t_2'$ depend on the type of $\mathbf{W}$).
In particular, $\theta_{t_1}L(w_0)\not\cong\theta_{t_2}L(w_0)$, both have simple top and simple
socle isomorphic to $L(w_0)$, and both do not contain any subquotient isomorphic to
$L(w)$ since $\mathfrak{l}(w_0)-\mathfrak{l}(w)=2$.
Hence from Lemma~\ref{hlem} it follows that the quotient of the corresponding $\mathtt{D}(w)$
modulo the square of the radical gives rise to the wild configuration \eqref{wconf2}.
Hence $\mathtt{D}(w)$ is wild in this case. This proves Theorem~\ref{tmain}\eqref{tm.2}.
To complete the proof we just note that Theorem~\ref{tmain}\eqref{tm.3} follows from
Theorem~\ref{tmain}\eqref{tm.1} and Theorem~\ref{tmain}\eqref{tm.2} using the Tame
and Wild Theorem.
\section{The case of a semi-simple algebra $\mathfrak{g}$}\label{s4}
Theorem~\ref{tmain} is formulated for a simple algebra $\mathfrak{g}$. However, in the
case of a semi-simple algebra the result is almost the same. In a standard way it
reduces to the description of the representation types of the tensor products of
algebras, described in Theorem~\ref{tmain}.
\begin{theorem}\label{tss}
Let $k>1$ be a positive integer, and $\mathtt{X}_i$, $i=1,\dots,k$, be basic algebras
associated to non-semi-simple categories from the list of Theorem~\ref{tmain}. Then
the algebra $\mathtt{X}_1\otimes\dots\otimes\mathtt{X}_k$ is never of finite representation
type, and it is of tame representation type only in the following two cases:
\begin{enumerate}[(1)]
\item\label{tss.1} $k=2$ and both $\mathtt{X}_1$ and $\mathtt{X}_2$ have Coxeter type
$(A_1,e,A_1)$;
\item\label{tss.2} $k=2$, one of $\mathtt{X}_1$ and $\mathtt{X}_2$ has Coxeter type
$(A_1,e,A_1)$, and the other one has Coxeter type $(A_1,e,e)$.
\end{enumerate}
\end{theorem}
\begin{proof}
The algebra in \eqref{tss.1} is isomorphic to $\mathbb{C}[x,y]/(x^2,y^2)$ and hence
is tame with well-known representations. Let us thus consider the algebra $\mathtt{X}$
of the case \eqref{tss.2}. This algebra is given by the following quiver with relations
\begin{equation}\label{eqbre}
\xymatrix{ 1 \ar@(ul,dl)[]_x \ar@/^/[rr]^u && 2 \ar@/^/[ll]^v \ar@(ur,dr)[]^y }\qquad x^2=y^2=uv=0,\ ux=yu,\ xv=vy.
\end{equation}
\begin{lemma}\label{lbre}
The algebra of \eqref{eqbre} is tame.
\end{lemma}
\begin{proof}
This algebra is tame by \cite{Be}; however, since that paper is not easily available
and does not contain a complete argument, we prove the tameness of $\mathtt{X}$ here. Consider
the subalgebra $\mathtt{X}'\subset \mathtt{X}$ generated by $x,y,u$. Its indecomposable
representations are
\begin{align}\label{xyu}
\xymatrix@R=2ex{ {e_8} \ar[d] \\ {e_1} } \hskip1.5cm
\xymatrix@R=2ex{ {e_9}\ar[d]\ar[dr] & {f_{10}}\ar[d] \\ {e_2} &{f_3} }\hskip1.5cm
\xymatrix@R=2ex{ {e_{10}}\ar[d]\ar[dr] & \\ {e_3}&{f_6} } \hskip1.5cm
\xymatrix@R=2ex{ {e_{11}}\ar[d]\ar[r] &{f_8}\ar[d] \\ {e_4}\ar[r] &{f_1} } \notag
\\
\\
\xymatrix@R=2ex{ {e_5} \\ {} } \hskip1.5cm
\xymatrix@R=2ex{ {e_6} \ar[dr] & {f_9}\ar[d] \\ & {f_2} } \hskip1.5cm
\xymatrix@R=2ex{ {e_7}\ar[r] & {f_5} } \hskip1.5cm
\xymatrix@R=2ex{ {f_{11}}\ar[d] \\ {f_4} } \hskip1.5cm
\xymatrix@R=2ex{ {f_7} \\{} } \notag
\end{align}
Here the elements $e_i$ form a basis of the space corresponding to the vertex $1$, the
elements $f_j$ form a basis of the space corresponding to the vertex $2$, the vertical
arrows show the action of $x$ and $y$, and the arrows going from left to right
show the action of $u$. Let $M$ be an $\mathtt{X}$-module. Decompose it as $\mathtt{X}'$-module.
Then the matrix $V$ describing the action of $v$ divides into the blocks $V_{ij},\
i,j=1,2,\dots,11$, corresponding to the basis elements $e_i$ and $f_j$ from above.
Moreover, since $uv=0$, the blocks $V_{ij}$ can only be nonzero if $i\in\{1,2,3,5,8\}$;
since $xv=vy$, $V_{ij}=0$ if $i>4,j<5$ or $i>7,j<8$, and $V_{ij}=V_{i+7,j+7}$ for
$i,j\in\{1,2,3,4\}$. If $M'$ is another $\mathtt{X}$-module and $V'=(V'_{ij})$ is the corresponding
block matrix, then a homomorphism $M\to M'$ is given by a pair of matrices $S,T$, where
$S:M(1)\to M'(1)$ and $T:M(2)\to M'(2)$. Divide them into blocks corresponding to the
division of $V$: $S=(S_{ij}),\,T=(T_{ij}),\ i,j=1,2,\dots,11$. One can easily check that
such block matrices define a homomorphism $M\to M'$ if and only if
the following conditions hold:
\begin{itemize}
\item $S$ and $T$ are block triangular, i.e. $S_{ij}=0$ and $T_{ij}=0$ if $i>j$.
\item $S_{ij}=S_{i+7,j+7}$ and $T_{ij}=T_{i+7,j+7}$ for $i,j\in\{1,2,3,4\}$.
\item $S_{ii}=T_{jj}$ if in the list \eqref{xyu} there is an arrow $e_i\to f_j$.
\item $S_{ij}=T_{kl}$ if in the list \eqref{xyu} there are arrows $e_i\to f_k$ and $e_j\to f_l$.
\item $S_{ij}=0$ if $(i,j)\in\{(4,5),(4,6),(6,8),(7,8),(7,9)\}$.
\item $T_{ij}=0$ if $(i,j)\in\{(3,5),(4,5), (4,6),(7,8)\}$.
\end{itemize}
Certainly, $S,T$ define an isomorphism if and only if all diagonal blocks are invertible.
In particular, we can replace the part $V_1=(V_{11}\ V_{12}\ V_{13}\ V_{14})$ by
$S_1^{-1}V_1T_1$, where $S_1$ is any invertible matrix and $T_1=(T_{ij}),\ i,j\in\{1,2,3,4\}$
is any invertible block triangular matrix. So we can suppose that $V_1$ is of the form
\begin{displaymath}
\left( \begin{array}{cc|cc|cc|cc}
0&I^{(1)}&0&0&0&0&0&0\\
0&0&0&I^{(2)}&0&0&0&0\\
0&0&0&0&0&I^{(3)}&0&0\\
0&0&0&0&0&0&0&I^{(4)}\\
0&0&0&0&0&0&0&0
\end{array} \right),
\end{displaymath}
where the vertical lines show the division of $V_1$ into blocks, $I^{(k)}$ denote identity
matrices (of arbitrary sizes). Denote the parts of the blocks $V_{1j}$ to the right of
$I^{(k)}$ by $V_{1k,j}$ and those to the right of the zero part of $V_1$ by $V_{5j}$.
Using automorphisms, we can make zero all $V_{11,j}$ and $V_{12,j}$, as well as the
blocks $V_{13,j}$ and $V_{14,j}$ for $j>6$. Note that $V_{1j}=V_{8,j+7}$,
and we can also make zero all parts of the blocks $V_{1,j+7}$ over the parts $I^{(j)}$
of the blocks $V_{8,j+7}$. Subdivide the blocks of $S$ and
$T$ corresponding to this subdivision of $V_1$. Note that, since $S_{22}=T_{99}=T_{33}$,
we must also subdivide the blocks $S_{2j}$ into $S_{20,j}$ and $S_{21,j}$
corresponding to the zero and nonzero parts of $V_{13}$. Then the extra conditions for the
new blocks are:
\begin{displaymath}
S_{21,20}=0 \quad\text{and}\quad S_{1k,1l}=0\quad \text{if}\quad k>l.
\end{displaymath}
Therefore, we get a matrix problem considered in \cite{Bo}. It is described by the
semichain
\begin{displaymath}
\xymatrix@R=1ex{ f_5\ar[r] & f_6\ar[r]\ar[dr] & f_7\ar[dr] \\
&& f_8 \ar[r] & f_9\ar[r] & f_{10} \ar[r]& f_{11} }
\end{displaymath}
for the columns, the chain
\begin{displaymath}
e_5 \to e_3 \to e_{21} \to e_{20} \to e_{15} \to e_{14} \to e_{13}
\end{displaymath}
for the rows, and the unique equivalence $e_3\sim f_6$. This matrix problem is tame, hence,
the algebra $\mathtt{X}$ is tame as well.
\end{proof}
If $k>2$ then each of $\mathtt{X}_1$, $\mathtt{X}_2$, and $\mathtt{X}_3$ has
at least one projective module with non-trivial endomorphism ring and thus
$\mathtt{X}_1\otimes \mathtt{X}_2\otimes \mathtt{X}_3$ contains a centralizer subalgebra,
which surjects onto $\mathbb{C}[x,y,z]/(x,y,z)^2$. The latter algebra is wild
by \cite{Dr2} and hence $\mathtt{X}_1\otimes\dots\otimes\mathtt{X}_k$ is wild.
If $k=2$ but none of the conditions \eqref{tss.1}, \eqref{tss.2} is satisfied,
then one of the algebras $\mathtt{X}_1$ and $\mathtt{X}_2$ has a projective
module, whose endomorphism algebra surjects onto $\mathbb{C}[x]/(x^3)$, and the other
one has a projective module, whose endomorphism algebra surjects onto $\mathbb{C}[y]/(y^3)$.
Hence there is a centralizer subalgebra in $\mathtt{X}_1\otimes \mathtt{X}_2$, which surjects onto
$\mathbb{C}[x,y]/(x^3,y^2)$, the latter being wild by \cite{Dr2}. This shows that
$\mathtt{X}_1\otimes \mathtt{X}_2$ is wild as well and completes the proof.
\end{proof}
\vspace{1cm}
\begin{center}
{\bf Acknowledgments}
\end{center}
The research was done during the visit of the first author to Uppsala
University, which was partially supported by the Faculty of Natural Science,
Uppsala University, the Royal Swedish Academy of Sciences, and The Swedish
Foundation for International Cooperation in Research and Higher Education
(STINT). This support and the hospitality of Uppsala University are gratefully
acknowledged. The second author was also partially supported by the Swedish
Research Council. We would like to thank Catharina Stroppel for her comments
on the paper. We are especially indebted to the referee for a very careful
reading of the manuscript and for many useful comments, suggestions, and
corrections.
TITLE: Set theory: Lexicographic ordering and chain proof
QUESTION [0 upvotes]: Let $A$ and $B$ be partially ordered sets, and let $C$ and $D$ be chains of $A$ and $B$, respectively. If $A \times B$ is ordered lexicographically, prove that $C \times D$ is a chain of $A \times B$.
My attempt
for all $c_1,c_2 \in C, c_1 \le c_2$ or $c_2 \le c_1$
for all $d_1,d_2 \in D, d_1 \le d_2$ or $d_2 \le d_1$
we have to show that $C \times D$ is a chain of $A \times B$.
$\implies \forall (c_1,d_1),(c_2,d_2) \in C \times D$,
$(c_1,d_1) \le (c_2,d_2)$ or $(c_2,d_2) \le (c_1,d_1)$
and I think I need to use the fact that $A \times B$ is ordered lexicographically but am having trouble making the connection.
Any help would be appreciated.
I have made some additions but am not sure if they prove the claim.
Since every $c_1,c_2 \in A$ and every $d_1,d_2 \in B$, and $A \times B$ is ordered lexicographically, we have that
$c_1 \lt c_2$ or
$c_1=c_2$ and $d_1 \le d_2$
$\iff (c_1,d_1) \le (c_2,d_2)$
Hence it follows that $C \times D$ is a chain of $A \times B$.
REPLY [2 votes]: If $(c_1,d_1), (c_2,d_2) \in C \times D \subseteq A \times B$, then $c_1,c_2 \in C$, which is a chain and $d_1,d_2 \in D$, which is a chain.
If $c_1 < c_2$, then $(c_1,d_1) < (c_2,d_2)$
If $c_1 > c_2$, then $(c_1,d_1) > (c_2,d_2)$
If $c_1 = c_2$, then do the same with $d_1$ and $d_2$.
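As a sanity check (not a proof), the claim can be verified on small finite examples; the helper names below are ad hoc:

```python
from itertools import product

def lex_le(p, q):
    """Lexicographic order on pairs: (a, b) <= (c, d) iff a < c, or a == c and b <= d."""
    return p[0] < q[0] or (p[0] == q[0] and p[1] <= q[1])

def is_chain(elems, le):
    """A subset is a chain iff every two of its elements are comparable."""
    return all(le(p, q) or le(q, p) for p, q in product(elems, repeat=2))

C = [1, 2, 3]                      # a chain in A
D = ['a', 'b']                     # a chain in B
print(is_chain(list(product(C, D)), lex_le))  # True

# Contrast: an antichain of sets under inclusion is detected as a non-chain.
print(is_chain([frozenset({1}), frozenset({2})], frozenset.issubset))  # False
```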
TITLE: Will these frogs ever meet?
QUESTION [8 upvotes]: Two frogs are on an eternal stairway. Will they ever be able to meet?
Anton is on step 14 and jumps 4 steps.
Billy is on step 16 and jumps 6 steps.
The way I look at this is that as long as they're not on the same step, the one on the lowest jumps next, so:
Their difference (Anton-Billy) is -2
Anton jumps to step 18, difference = 2
Billy jumps to step 22, difference = -4
Anton jumps to step 22 - THEY MEET
If Anton had been on step 15 instead:
Their difference is -1
Anton jumps to step 19, difference is 3
Billy jumps to step 22, difference is -3
Anton jumps to step 23, difference is -1
Since the difference has repeated, they will never meet
However, I am certain there's a much better way to find out if and where they will ever meet in a single equation.
REPLY [1 votes]: jump to ten. This answer would have been more concise but for the character count.
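As a follow-up sketch (the gcd criterion is an added observation, not from the thread): a shared step requires $14 + 4i = 16 + 6j$ to have a solution in nonnegative integers, so $\gcd(4, 6)$ must divide the difference of the starting steps. The simulation below also implements the difference-repetition test from the question; since the next difference depends only on the current one, a repeated difference really does mean the frogs cycle forever.

```python
from math import gcd

def can_meet(a0, da, b0, db):
    """Necessary condition: a0 + i*da == b0 + j*db has an integer
    solution only if gcd(da, db) divides b0 - a0."""
    return (b0 - a0) % gcd(da, db) == 0

def frogs_meet(a0, da, b0, db):
    """Simulate 'the lower frog jumps next'; the difference a - b evolves
    deterministically, so a repeated difference means they never meet."""
    a, b, seen = a0, b0, set()
    while a != b:
        if a - b in seen:
            return None            # cycle: they will never meet
        seen.add(a - b)
        if a < b:
            a += da
        else:
            b += db
    return a

print(frogs_meet(14, 4, 16, 6))    # 22
print(frogs_meet(15, 4, 16, 6))    # None (gcd(4, 6) = 2 does not divide 1)
```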
TITLE: Checking a fundamental unit of a real quadratic field
QUESTION [4 upvotes]: I just want to check whether I have got the fundamental unit of a certain real quadratic field, but I can't find how.
For instance, if I am working in $\mathbb{Q}(\sqrt{2})$ then $\mathcal{O}_K=\mathbb{Z}[\sqrt{2}]$ and so the fundamental unit is of the form $a+b\sqrt{2}$. I suppose the fundamental unit is $1+\sqrt{2}$, ie. $a=1, b=1$. I know this satisfies Pell's Equation:
$$a^2-db^2=1^2-(2)(1^2)=1-2=-1$$
and so is a unit. How would I show this is fundamental?
Since the fundamental unit generates all the other units, I first considered trying to show that $(1+\sqrt{2})^n$ for $n \in \mathbb{N}$ must generate units, but this got messy. Surely there is a simpler way to check if this unit is fundamental? Is it simply that these are the smallest values of $a,b$ to create a unit?
REPLY [3 votes]: Here’s the story, though I leave it to others to furnish a proof or give a proper reference.
For $d\ge2$ and not a square, you get all solutions to the Pell equation $m^2-dn^2=\pm1$ by looking at the continued fraction expansion of $\sqrt d$. If $k=\lfloor\sqrt d\rfloor$, then your expansion looks like this:
$$
\sqrt d=k+\bigl[\frac1{\delta_1+}\>\frac1{\delta_2+}\cdots\frac1{\delta_{r-2}+}\>\frac1{\delta_{r-1}+}\>\frac1{2k+}\>\bigr]\,,
$$
where the part in brackets repeats infinitely. Then, every time you evaluate a convergent to the continued fraction just before the appearance of the $2k$, you’ll get a solution of Pell from the numerator $m$ and the denominator $n$. For instance, $\sqrt7=2+\frac1{1+}\,\frac1{1+}\,\frac1{1+}\,\frac1{4+}\,\cdots$, repeating with a period of length four. You evaluate $2+\frac1{1+}\,\frac1{1+}\,\frac1{1}=8/3$, and lo and behold, the first solution of $m^2-7n^2=\pm1$ is $m=8$, $n=3$.
For $d$ squarefree and incongruent to $1$ modulo $4$, this gives you all the units in $\Bbb Q(\sqrt{d}\,)$, but for $d\equiv1\pmod4$, it may happen that what you get is the cube of a unit. But as I recall, in that case, the primitive unit’s coordinates always will show up as the numerator and denominator of an earlier convergent. | {"set_name": "stack_exchange", "score": 4, "question_id": 1220602} |
\begin{document}
\title[\it Spin-Boson model]{Exact solution of the Schr\"{o}dinger equation with the spin-boson Hamiltonian}
\author{Bart{\l}omiej Gardas}
\address{Institute of Physics, University of Silesia, PL-40-007 Katowice, Poland}
\ead{[email protected]}
\begin{abstract}
We address the problem of obtaining the exact reduced dynamics of the spin-half (qubit)
immersed within the bosonic bath (enviroment). An exact solution of the Schr\"{o}dinger
equation with the paradigmatic spin-boson Hamiltonian is obtained. We believe that this
result is a major step ahead and may ultimately contribute to the complete resolution of
the problem in question. We also construct the constant of motion for the spin-boson system.
In contrast to the standard techniques available within the framework of the open quantum
systems theory, our analysis is based on the theory of block operator matrices.
\end{abstract}
\pacs{03.65.Yz, 03.67.-a, 02.30.Tb, 03.65.Db}
\maketitle
\section{Introduction}
The Hamiltonian of the paradigmatic spin-boson (SB) model is specified as~\cite{SB_Fannes, SB_Fannes2,SB_Spohn,SB_Honegger}
\begin{equation}
\label{eq:SB}
\mf{H}_{\text{SB}} = \txt{H}_{\text{S}}\ot\Id{B} + \Id{S}\ot \txt{H}_{\text{B}} + \mf{H}_{\text{int}},
\end{equation}
where
\begin{equation}
\label{eq:S}
\text{H}_{\text{S}} = (\beta\sigma_z+\alpha\sigma_x)
\quad\text{and}\quad
\txt{H}_{\text{B}} = \int_{0}^{\infty}d\omega\, h(\omega)\,\cre{\omega}\ani{\omega},
\end{equation}
are the Hamiltonian of the spin-half (qubit) and the bosonic field (environment), respectively. The interaction between the systems
has the following form
\begin{equation}
\label{eq:CSB}
\mf{H}_{\text{int}} = \sigma_z\ot\int_{0}^{\infty}d\omega \left(g(\omega)^{*}\ani{\omega}+g(\omega)\cre{\omega}\right)
\equiv\sigma_z\ot\txt{V}.
\end{equation}
$\Id{S}$ and $\Id{B}$ are the identity operators in corresponding Hilbert spaces of the qubit and the environment, respectively.
In the above description, $\sigma_z$ and $\sigma_x$ are the standard Pauli matrices. The bosonic creation $\cre{\omega}$ and annihilation
$\ani{\omega}$ operators obey the canonical commutation relation: $[\ani{\omega},\cre{\eta}]=\delta(\omega-\eta)\Id{B}$, for $\omega,
\eta\in[0,\infty)$. The functions $h$, $g\in L^{2}[0,\infty]$ model the energy of the free
bosons and the coupling of the bosons to the qubit, respectively. The constants $\alpha$ and $\beta$ are assumed to be real and
nonnegative. Furthermore, $\beta$ represents the energy gap between the eigenstates $\ket{0}$ and $\ket{1}$ of $\sigma_z$, while
$\alpha$ is responsible for the tunneling between these states. The Hamiltonian~\eref{eq:SB} acts on the total Hilbert space
$\mc{H}_{\text{tot}}=\mb{C}^{2}\ot\mc{F}_{\text{B}}$, where $\mc{F}_{\text{B}}:=\mc{F}(L^{2}[0,\infty])$ is the bosonic Fock space~\cite{SB_Alicki}.
It is worth mentioning that one more often encounters situations in which there is a countable
(in particular, finite) number of bosonic modes (see \eg{}~\cite{Leggett,SB_work,SB_paradigm,SB_parity}). In such cases we define the SB model via the following Hamiltonian
\begin{equation}
\label{eq:DSB}
\mf{H}_{\text{SB}} = (\beta\sigma_z+\alpha\sigma_x)\ot\Id{B} +
\Id{S}\ot\sum_{k}h_ka_k^{\dg}a_k
+
\sigma_z\ot\sum_{k}\left(g_k^{*}a_k+g_ka^{\dg}\right),
\end{equation}
where the creation and annihilation operators $a_k^{\dg}$, $a_k$ satisfy $[a_k,a_{l}^{\dg}]=\delta_{kl}$.
Formally, it is possible to obtain~\eref{eq:DSB} from~\eref{eq:SB} by setting
\begin{equation}
x(\omega) = \sum_{k}x_k\delta(\omega-\omega_k),\quad\txt{where}\quad x=h,g.
\end{equation}
Therefore, we can treat both cases simultaneously. Although generalizations of the SB model (\eg{} asymmetric coupling~\cite{dajka})
are also under intensive investigation, we will not focus on them in this paper.
The problem of a small quantum system coupled to the external degrees of freedom plays an important role in various
fields of modern quantum physics. The SB model provides a simple mathematical description of such coupling in the case
of two-level quantum systems. For instance, an interaction between two-level atoms and the electromagnetic radiation
can be modeled via the SB Hamiltonian~\cite{puri}. For this reason the SB model is of great importance to the modern
quantum optics. There are various physical problems (\eg, decoherence~\cite{zurek,dajka_cat,dajka1,dajka2}, geometric
phase~\cite{dajka_phase}) related to the properties of the model in question, which have already been addressed and intensively
discussed. Nonetheless, an exact solution of the Schr\"{o}dinger equation:
\begin{equation}
\label{eq:Sch}
\rmi\partial_t\ket{\Psi_t} = \mf{H}_{\text{SB}}\ket{\Psi_t}
\quad\text{with}\quad \ket{\Psi_0}\equiv\ket{\Psi},
\end{equation}
is still missing for both $\alpha\not=0$ and $\beta\not=0$. Several approximation methods~\cite{davies} have been developed
in the past fifty years to manage this problem. Models obtained from the SB Hamiltonian under these approximations are
well-established and in most cases they are exactly solvable. The famous Jaynes-Cummings model~\cite{GJC} can serve as an example.
Formally, one can always express the solution of~(\ref{eq:Sch}) as $\ket{\Psi_t}=\mf{U}_t\ket{\Psi}$, where
$\mf{U}_t:=\exp(-\rmi\mf{H}_{\text{SB}}t)$ is the time evolution operator (Stone's theorem~\cite{simon}). Needless to say, such a form
of the solution is useless for practical purposes.
There is at least one important reason for which a manageable form of the time evolution operator $\mf{U}_t$ is worth seeking.
Namely, it allows one to construct the exact reduced time evolution of the spin immersed in the bosonic bath, the so-called reduced
dynamics~\cite{Alicki}:
\begin{equation}
\label{reduced}
\rho_t = \text{Tr}_{\text{B}}(\mf{U}_t\rho_0\otimes\omega_{\txt{B}}\mf{U}_t^{\dagger}).
\end{equation}
Above, $\omega_{\txt{B}}$ denotes the initial state of the bosonic bath and $\txt{Tr}_{\txt{B}}$ denotes the partial trace, \ie,
$\txt{Tr}_{\txt{B}}(\txt{M}\otimes X)=\txt{M}\txt{Tr}X$, where $\mbox{Tr}$ refers to the usual trace on $\mc{F}_{B}$. For the
sake of simplicity we have assumed that the initial state of the composite system $\rho_{\txt{int}}$ is the tensor product of
the states $\rho_0$ and $\omega_{\txt{B}}$. In other words, no initial correlations between the systems are
present~\cite{korelacje,erratum,KrausRep,dajka3}.
In general, the formula~(\ref{reduced}) is far less useful than its theoretical simplicity might indicate. Indeed, to trace out
the state $\mf{U}_t\rho_{\txt{int}}\mf{U}_t^{\dagger}$ over the bosonic degrees of freedom, one needs to i) calculate $\mf{U}_t$
and ii) apply the result to the initial state $\rho_{\text{int}}$. Herein we will cover the first step and we will investigate
the ability to accomplish the second one.
In order to write the time evolution operator $\mf{U}_t$ in a computationally accessible form, the diagonalization of its generator
$\mf{H}_{\text{SB}}$ or an appropriate factorization~\cite{faktoryzacja} is required. It can be found (see \eg,~\cite{mgr,RiccEq,gardas2})
that the problem of diagonalization on the Hilbert space $\mb{C}^{2}\ot\mc{F}_{\text{B}}$ can be mapped to the problem of resolving the
Riccati equation~\cite{Vadim}. This new approach was recently successfully applied to the time-dependent spin-spins model~\cite{gardas3}.
As a result, the exact reduced dynamics of the qubit in contact with a spin environment and in the presence of a precessing magnetic
field has been obtained. It is interesting, therefore, to apply this approach to the SB model as well. This paper is devoted to
accomplishing this purpose. Although an explicit form of the Riccati equation has already been derived~\cite{gardas}, its solution has not
been provided yet. In this paper we derive an exact solution of this equation assuming $\beta=0$.
\section{The block operator matrix representation and the Riccati equation}
We begin by reviewing some basic facts concerning a connection between the theory of block operator matrices~\cite{spectral} and the
SB model. First, the Hamiltonian~(\ref{eq:SB}) admits the block operator matrix representation~\cite{gardas,bom}:
\begin{equation}
\label{eq:bomSB}
\mf{H}_{\text{SB}} =
\begin{bmatrix}
\txt{H}_{\text{B}}+\txt{V}+\beta & \alpha \\
\alpha & \txt{H}_{\text{B}}-\txt{V}-\beta
\end{bmatrix}
\equiv
\begin{bmatrix}
\txt{H}_{\p} & \alpha \\
\alpha & \txt{H}_{\m}
\end{bmatrix},
\end{equation}
with respect to the direct sum decomposition $\mc{H}_{\text{tot}}=\mc{F}_{\text{B}}\oplus\mc{F}_{\text{B}}$ of $\mc{H}_{\text{tot}}$.
The entries $\alpha$ and $\beta$ of the operator matrix~(\ref{eq:bomSB}) are understood as $\alpha\Id{B}$ and $\beta\Id{B}$, respectively.
Henceforward, we use the same abbreviation for any complex number.
The Riccati operator equation associated with the matrix~\eref{eq:bomSB} reads~\cite{gardas}
\begin{equation}
\label{eq:riccatiSB}
\alpha X^{2} + X\txt{H}_{\p}-\txt{H}_{\m}X-\alpha = 0,
\end{equation}
where $X$ is an unknown operator, acting on $\mc{F}_{\txt{B}}$, which needs to be determined. The solution of this equation, if it
exists, can be used to diagonalize the Hamiltonian~\eref{eq:bomSB}. To be more specific, if $X$ solves~(\ref{eq:riccatiSB}) the
following equality holds true
\begin{equation}
\label{eq:diag}
\mf{S}^{-1}\mf{H}_{\text{SB}}\mf{S} =
\begin{bmatrix}
\txt{H}_{\p}+\alpha X & 0 \\
0 & \txt{H}_{\m}-\alpha X^{\dg}
\end{bmatrix},
\quad\text{where}\quad
\mf{S} =
\begin{bmatrix}
1 & -X^{\dg} \\
X & 1
\end{bmatrix}.
\end{equation}
By means of this decomposition we can write $\mf{U}_t$ in an explicit matrix form:
\begin{equation}
\label{eq:evolve}
\mf{U}_t = \mf{S}\mbox{diag}[e^{-\rmi\left(\txt{H}_{\p}+\alpha X\right)t}, e^{-\rmi\left(\txt{H}_{\m}-\alpha
X^{\dagger}\right)t}]\mf{S}^{-1}.
\end{equation}
Note that the last formula reduces the problem of finding the solution of the Schr\"{o}dinger equation~\eref{eq:Sch} to the problem of resolving
the Riccati equation~\eref{eq:riccatiSB}. It is well-established that the reduced dynamics~\eref{reduced} can easily be obtained when
$\alpha=0$~\cite{SB_Alicki}. In this case no additional assumptions on $\beta$ are needed, which should not be surprising since the
matrix~\eref{eq:bomSB} is already in diagonal form ($X=0$). Moreover, if $\alpha=0$ the qubit does not exchange energy with the bosonic
field because $[\txt{H}_{\text{S}}\ot\Id{B}, \mf{H}_{\text{SB}}]=0$. Therefore, the only exactly solvable case known at present
represents a rather extreme physical situation.
In the next section we will derive an exact solution of the RE~\eref{eq:riccatiSB} assuming $\beta=0$; nevertheless, we do not impose any
restrictions on $\alpha$. This is exactly the opposite of the situation discussed above. At this point a natural question
can be posed: what about the case when both $\alpha$ and $\beta$ are not equal to zero? Unfortunately, the answer is still to be
found. In fact, the SB model is usually defined only for $\beta=0$. At first it might seem that the complexity of the problem is the
same for both $\beta=0$ and $\beta\not=0$. Although this is indeed true when $\alpha=0$, no argument proving this conjecture for
$\alpha\not=0$ has been given so far. We will return to this matter at the end of the next section.
\section{Solution of the Riccati equation}
\subsection{Single boson case}
To understand the idea of our approach better let us first consider the case where there is only one boson in the bath~\cite{CJC,JC}.
Then, the Hamiltonian of the SB model can be written by using the block operator matrix nomenclature as ($\beta=0$)
\begin{equation}
\label{eq:singleBOM}
\mf{H}_{\txt{SB}} =
\begin{bmatrix}
\txt{H}_{\m} & \alpha \\
\alpha & \txt{H}_{\p}
\end{bmatrix}
\quad\text{with}\quad
\txt{H}_{\spm} = \omega a^{\dagger}a\pm(g^{\ast}a+ga^{\dagger}).
\end{equation}
The operators $\txt{H}_{\spm}$ can be expressed in a more compact form, that is
\begin{equation}
\label{eq:rules}
\txt{H}_{\m} = \omega\txt{D}_fa^{\dagger}a\txt{D}_{\m f}- E
\quad\text{and}\quad
\txt{H}_{\p} = \omega\txt{D}_{\m f}a^{\dagger}a\txt{D}_f- E,
\end{equation}
where $f=g/\omega$ and $E=|g|^2/\omega$. The displacement operator $\txt{D}_f := \exp(fa^{\dg}-f^{*}a)$
has the following, easy to prove, properties
\begin{equation}
\label{eq: weyl}
\txt{i)}\quad\txt{D}_{-f}=\txt{D}_f^{\dg},\quad\text{ii)}\quad \txt{D}_f\txt{D}_{\m f}=\Id{B}
\quad\txt{and}\quad\text{iii)}\quad
\txt{D}_{f}\txt{D}_{g} = e^{\rmi\Im (fg^*)}\txt{D}_{f+g}.
\end{equation}
$\Im$ stands for the imaginary part of the complex number $fg^*$. The relations~\eref{eq:rules} can be proven by using the equality
$\txt{D}_fa\txt{D}_{\m f} = a-f$, which follows from the Baker-Campbell-Hausdorff formula~\cite{galindo1,BCH2}. For the sake of
simplicity and without essential loss of generality we rescale the Hamiltonian~\eref{eq:singleBOM} so that
$\mf{H}_{\text{SB}}\to\mf{H}_{\text{SB}}+E$. This is nothing but a shift of the reference point of the total energy.
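As a numerical sanity check (not part of the derivation), the properties~\eref{eq: weyl} and the relation $\txt{D}_fa\txt{D}_{\m f}=a-f$ can be verified in a truncated Fock space. The sketch below uses the sign convention $\txt{D}_f=\exp(fa^{\dg}-f^{*}a)$, for which $\txt{D}_fa\txt{D}_{\m f}=a-f$ holds; the truncation dimension and displacement parameters are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                    # Fock-space truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # truncated annihilation operator
ad = a.conj().T

def D(f):
    """Displacement operator, sign convention D_f = exp(f a^dag - f* a)."""
    return expm(f * ad - np.conj(f) * a)

f, g = 0.3 + 0.2j, -0.1 + 0.25j
low = np.s_[:15, :15]   # low-lying block, where truncation effects are negligible

# i)  D_{-f} = D_f^dagger  (exact even after truncation: the generator is anti-Hermitian)
assert np.allclose(D(-f), D(f).conj().T)
# ii) D_f D_{-f} = 1       (exact: exp(A) exp(-A) = 1 for the same matrix A)
assert np.allclose(D(f) @ D(-f), np.eye(N))
# iii) D_f D_g = exp(i Im(f g*)) D_{f+g}  (holds on the low-lying states)
lhs = D(f) @ D(g)
rhs = np.exp(1j * np.imag(f * np.conj(g))) * D(f + g)
assert np.abs((lhs - rhs)[low]).max() < 1e-8
# D_f a D_{-f} = a - f     (Baker-Campbell-Hausdorff; low-lying states)
assert np.abs((D(f) @ a @ D(-f) - (a - f * np.eye(N)))[low]).max() < 1e-8
print("all displacement-operator identities verified")
```

Properties i) and ii) survive the truncation exactly, since the generator of $\txt{D}_f$ is anti-Hermitian; the BCH-based identities acquire only boundary corrections that are negligible on the low-lying states.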
After this procedure the Hamiltonian~\eref{eq:singleBOM} takes the form
\begin{equation}
\label{eq:rescal}
\mf{H}_{\txt{SB}} =
\begin{bmatrix}
\omega\txt{D}_fa^{\dagger}a\txt{D}_{\m f} & \alpha \\
\alpha & \omega\txt{D}_{\m f}a^{\dagger}a\txt{D}_f
\end{bmatrix},
\end{equation}
while the corresponding Riccati equation reads
\begin{equation}
\label{eq:riccatiSB2}
\alpha X^{2} + X\left(\omega\txt{D}_fa^{\dagger}a\txt{D}_{\m f}\right)-
\left(\omega\txt{D}_{\m f}a^{\dagger}a\txt{D}_f\right)X-\alpha = 0.
\end{equation}
To solve this equation, let us first define the operator $\txt{P}_{\varphi}$ as
\begin{equation}
\label{eq:Pdef}
\txt{P}_{\varphi} := \exp (\rmi\varphi a^{\dg}a), \quad \varphi\in[0,2\pi).
\end{equation}
It is not difficult to see that
\begin{equation}
\label{eq:prop}
\txt{i)}\quad\txt{P}_{\m\varphi}=\txt{P}_{\varphi}^{\dg},
\quad\txt{ii)}\quad
\txt{P}_{\varphi}\txt{P}_{\m\varphi}=\Id{B}
\quad\text{and}\quad\txt{iii)}\quad
\txt{P}_{\varphi}\txt{P}_{\psi} = \txt{P}_{\varphi\p\psi}.
\end{equation}
Moreover, from the Baker-Campbell-Hausdorff formula we also have $\txt{P}_{\varphi}a\txt{P}_{\m\varphi} = e^{-\rmi\varphi}a$,
which ultimately leads to
\begin{equation}
\label{eq:trans}
\txt{P}_{\varphi}\txt{D}_f\txt{P}_{\m\varphi} = \txt{D}_{e^{\rmi\varphi}f}.
\end{equation}
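The properties~\eref{eq:prop} and the transformation rule~\eref{eq:trans} can likewise be checked numerically; in fact, $\txt{P}_{\varphi}a\txt{P}_{\m\varphi}=e^{-\rmi\varphi}a$ holds exactly even in a truncated Fock space, because $a^{\dg}a$ remains diagonal there. An illustrative sketch (all parameter values are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)
num = a.conj().T @ a                       # number operator a^dag a

def P(phi):
    """P_phi = exp(i phi a^dag a)."""
    return expm(1j * phi * num)

def D(f):
    """Displacement operator (convention D_f = exp(f a^dag - f* a))."""
    return expm(f * a.conj().T - np.conj(f) * a)

phi, psi, f = 0.7, 1.9, 0.3 - 0.2j
# i)-iii): P_{-phi} = P_phi^dag,  P_phi P_{-phi} = 1,  P_phi P_psi = P_{phi+psi}
assert np.allclose(P(-phi), P(phi).conj().T)
assert np.allclose(P(phi) @ P(-phi), np.eye(N))
assert np.allclose(P(phi) @ P(psi), P(phi + psi))
# P_phi a P_{-phi} = e^{-i phi} a  (exact in the truncated space)
assert np.allclose(P(phi) @ a @ P(-phi), np.exp(-1j * phi) * a)
# hence P_phi D_f P_{-phi} = D_{e^{i phi} f}
assert np.allclose(P(phi) @ D(f) @ P(-phi), D(np.exp(1j * phi) * f))
print("P_phi identities verified")
```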
In what follows, we will prove that $\txt{P}_{\pi}$ solves the Riccati equation~(\ref{eq:riccatiSB2}). First, let us note that
$\txt{P}_{\pi}$ is a function of the number operator $a^{\dagger}a$, thus $[\txt{P}_{\pi},a^{\dagger}a]=0$. In view of~\eref{eq:trans}
we obtain $\txt{P}_{\pi}\txt{D}_f\txt{P}_{\m\pi} = \txt{D}_{\m f}$, hence
\begin{equation}
\label{eq:unit}
\txt{P}_{\pi}\left(\txt{D}_fa^{\dg}a\txt{D}_{\m f}\right) =
\left(\txt{D}_{\m f}a^{\dg}a\txt{D}_f\right)\txt{P}_{\pi}.
\end{equation}
By writing $\txt{P}_{\pi}$ in terms of the eigenstates $\ket{n}$ of $a^{\dg}a$ we obtain
\begin{equation}
\label{eq:parity}
\txt{P}_{\pi} = \sum_{n\in\mb{N}}e^{\rmi\pi n}\ket{n}\bra{n}
= \sum_{n\in\mb{N}}(-1)^n\ket{n}\bra{n},
\end{equation}
where we used the well-known mathematical fact that $a^{\dg}a\ket{n}=n\ket{n}$, for $n\in\mb{N}$. Finally, from~(\ref{eq:parity})
we conclude that $\txt{P}_{\pi}$ is an involution, \ie, $\txt{P}_{\pi}^{2}=\Id{B}$, which together with~\eref{eq:unit} leads to
\begin{equation}
\label{eq:sol}
\alpha\txt{P}_{\pi}^{2}+ \txt{P}_{\pi}\left(\omega\txt{D}_fa^{\dg}a\txt{D}_{\m f}\right)-
\left(\omega\txt{D}_{\m f}a^{\dg}a\txt{D}_f\right)\txt{P}_{\pi}-\alpha = 0.
\end{equation}
Note that $\txt{P}_{\pi}$ transforms the creation and annihilation operators $a^{\dg}$ and $a$ into $-a^{\dg}$ and $-a$, respectively.
In other words, $\txt{P}_{\pi}$ can be interpreted as the bosonic parity operator~\cite{Bender}. Moreover, $\txt{P}_{\pi}$
does not depend on the parameter $\alpha$; in particular, $\txt{P}_{\pi}$ remains a nontrivial ($X\not=0$) solution of the
Riccati equation~\eref{eq:riccatiSB2} even when $\alpha=0$ (Sylvester equation).
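The algebraic argument above can also be confirmed numerically: in a truncated Fock space the parity matrix $\mathrm{diag}((-1)^n)$ satisfies the Riccati equation~\eref{eq:riccatiSB2} exactly, because the conjugation rules $\txt{P}_{\pi}a\txt{P}_{\pi}=-a$ and $\txt{P}_{\pi}\txt{D}_f\txt{P}_{\pi}=\txt{D}_{\m f}$ survive the truncation. A minimal sketch (parameter values arbitrary; the names \texttt{A1}, \texttt{A2} are just local labels for the two diagonal entries):

```python
import numpy as np
from scipy.linalg import expm

N, omega, alpha, f = 50, 1.0, 0.8, 0.3 + 0.1j
a = np.diag(np.sqrt(np.arange(1, N)), 1)
num = a.conj().T @ a
D = lambda g: expm(g * a.conj().T - np.conj(g) * a)   # displacement operator

A1 = omega * D(f) @ num @ D(-f)        # omega D_f a^dag a D_{-f}
A2 = omega * D(-f) @ num @ D(f)        # omega D_{-f} a^dag a D_f
P_pi = np.diag((-1.0) ** np.arange(N)) # parity operator diag((-1)^n)

# residual of the Riccati equation: alpha X^2 + X A1 - A2 X - alpha, with X = P_pi
res = alpha * P_pi @ P_pi + P_pi @ A1 - A2 @ P_pi - alpha * np.eye(N)
assert np.abs(res).max() < 1e-8
print("max Riccati residual:", np.abs(res).max())
```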
Now, by means of the parity operator $\text{P}_{\pi}$, we can derive an accessible form of the time evolution operator $\mf{U}_t$.
According to~\eref{eq:diag} and~\eref{eq:evolve} we have
\begin{equation}
\label{eq:esolve}
\mf{U}_{t} =\frac{1}{2}
\begin{bmatrix}
\text{U}_{\p}(t) & \txt{V}_{\p}(t)\txt{P}_{\pi} \\
\txt{V}_{\m}(t)\txt{P}_{\pi} & \txt{U}_{\m}(t)
\end{bmatrix},
\end{equation}
where the quantities $\txt{U}_{\spm}(t)$ and $\txt{V}_{\spm}(t)$ read as follows
\begin{equation}
\txt{U}_{\spm}(t) = e^{-\rmi(\txt{H}_{\spm}+\alpha\parity)t} + e^{-\rmi(\txt{H}_{\spm}-\alpha\parity)t},
\quad
\txt{V}_{\spm}(t) = e^{-\rmi(\txt{H}_{\spm}+\alpha\parity)t} - e^{-\rmi(\txt{H}_{\spm}-\alpha\parity)t}.
\end{equation}
For $\alpha=0$ the formula~\eref{eq:esolve} simplifies to the well-known result~\cite{SB_Alicki}, which can be obtained
independently, without solving the Riccati equation.
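As an independent numerical check of~\eref{eq:diag} and~\eref{eq:evolve}, the block-matrix form of $\mf{U}_t$ can be compared with a brute-force exponentiation of the full Hamiltonian in a truncated Fock space. The sketch below uses the ordering of~\eref{eq:bomSB}, with $\txt{H}_{\p}$ in the upper-left block; all parameter values are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

N, omega, alpha, f, t = 40, 1.0, 0.8, 0.25, 1.3
a = np.diag(np.sqrt(np.arange(1, N)), 1)
num = a.conj().T @ a
D = lambda g: expm(g * a.conj().T - np.conj(g) * a)
I = np.eye(N)

Hp = omega * D(-f) @ num @ D(f)        # H_+ (rescaled, beta = 0)
Hm = omega * D(f) @ num @ D(-f)        # H_-
P = np.diag((-1.0) ** np.arange(N))    # parity operator, solves the Riccati equation

# full Hamiltonian in the ordering of eq. (bomSB): H_+ in the upper-left block
H = np.block([[Hp, alpha * I], [alpha * I, Hm]])
S = np.block([[I, -P], [P, I]])
S_inv = 0.5 * np.block([[I, P], [-P, I]])
Z = np.zeros((N, N))

# U_t = S diag(exp(-i(H_+ + alpha P)t), exp(-i(H_- - alpha P)t)) S^{-1}
blk = np.block([[expm(-1j * (Hp + alpha * P) * t), Z],
                [Z, expm(-1j * (Hm - alpha * P) * t)]])
U_formula = S @ blk @ S_inv
U_direct = expm(-1j * H * t)           # brute-force exponentiation for comparison
assert np.abs(U_formula - U_direct).max() < 1e-7
print("block-matrix U_t matches direct exponentiation")
```

The agreement is exact up to floating-point error, since the identity $\txt{P}_{\pi}\txt{H}_{\p}\txt{P}_{\pi}=\txt{H}_{\m}$, on which the diagonalization rests, holds exactly even after truncation.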
It is instructive to see how the bosonic parity operator $\txt{P}_{\pi}$ can also be used to construct a constant of motion for
the SB model. For this purpose let us take $\mf{J}_{\pi}:=\sigma_x\otimes\txt{P}_{\pi}$; then $[\mf{J}_{\pi},\mf{H}_{\txt{SB}}]=0$,
so the Heisenberg equation of motion yields $\dot{\mf{J}}_{\pi}=0$, which means that $\mf{J}_{\pi}$ does not vary with
time. Since $\txt{P}_{\pi}$ is an involution, \ie, $\txt{P}_{\pi}^2=\Id{B}$, the operator $\mf{J}_{\pi}$ is an involution as well. Therefore,
$\mf{J}_{\pi}$ can be seen as the parity operator of the total system. In conclusion, the total parity is conserved when $\beta=0$.
For $\beta\not=0$ the parity symmetry of the total system is broken and the Riccati equation~(\ref{eq:riccatiSB2}) cannot be solved
by applying a method similar to the one we have used above in the case of $\beta=0$. From a mathematical point of view, the problem
arises because the diagonal entries $\txt{H}_{\txt{B}}\pm V\pm\beta$ are no longer related by a unitary transformation. Indeed, if
the converse were true, there would exist a unitary operator $\txt{W}$ such that $\txt{W}^{\dg}\left(\txt{H}_{\txt{B}}+
V+\beta\right)\txt{W} =\txt{H}_{\txt{B}}-V-\beta$. Thereby the spectra $\sigma\left(\txt{H}_{\txt{B}}\pm V\pm\beta\right)=
\sigma(\txt{H}_{\txt{B}}\pm V)\pm\beta$ would have to coincide, which clearly is impossible unless $\beta=0$. As a result, for
$\alpha\not=0$ one can expect that the mathematical complexity of the SB model is different within the regimes $\beta=0$ and $\beta\not=0$.
\subsection{Generalization}
The results of the preceding subsection can be generalized to the case where there is more than one boson in the bath. In order to
achieve this objective one needs to redefine the displacement operator $\txt{D}_f$ in the following way
\begin{equation}
\label{eq:Rweyl}
\txt{D}_f\rightarrow\exp\left(A^{\dg}-A\right),
\quad\text{where}\quad
A = \sum_{k} \frac{g_k^*}{\omega_k}a_k.
\end{equation}
Then, the solution of the Riccati equation reads
\begin{equation}
\label{eq:Rparity}
\txt{P} = \exp(\rmi\pi\sum_ka_k^{\dg}a_k)
=\bigotimes_{k}\txt{P}_{\pi,k},
\quad\text{where}\quad \txt{P}_{\pi,k} = \exp(\rmi\pi a_k^{\dagger}a_k).
\end{equation}
\section{Remarks and Summary}
In this article, we have solved the Riccati operator equation associated with the Hamiltonian
of the paradigmatic spin-boson model. Next, in terms of the solution we have derived an explicit matrix
form of the time evolution operator of the total system. This, in particular, allows us to solve the
Schr\"{o}dinger equation~(\ref{eq:Sch}). We wish to emphasize that in order to obtain the reduced
dynamics~(\ref{reduced}) one more step is required. Namely, terms of the form
\begin{equation}
\label{trace}
\mbox{Tr}(e^{-\rmi[\txt{H}_{\spm}\pm\alpha\txt{P}_{\pi}]t}\omega_{\txt{B}}e^{\rmi[\txt{H}_{\spm}\pm\alpha\txt{P}_{\pi}]t})
\end{equation}
need to be determined.
Of course, one can always evaluate the quantities given above by using, \eg, perturbation
theory. However, the true challenge is to achieve this goal without approximations.
It seems that the simplest way to do so is to solve the eigenvalue problem
$(\txt{H}_{\spm}\pm\alpha\txt{P}_{\pi})\ket{\psi}=\lambda\ket{\psi}$. The ability to solve
this eigenproblem separates us from deriving the exact reduced dynamics of the qubit immersed
within the bosonic bath. We stress that for $\alpha\not=0$ the problem is nontrivial since the
qubit exchanges energy with its environment. Moreover, the mathematical complexity of the model
is affected not only by the transfer of energy between the subsystems but also by the energy
splitting between the states $\ket{0}$ and $\ket{1}$.
Interestingly, the Riccati equation is a second order operator equation, thus one can expect
that its solution involves a square root. In particular, nothing indicates that the solution should be
linear as it is in our case. Therefore, we not only solved the Riccati equation~(\ref{eq:riccatiSB})
but also linearized the solution. At this point a worthwhile question can be posed: is it a
coincidence that a linear operator happens to solve a nonlinear equation? Perhaps it is a
manifestation of some additional structure in the model. Historically, a similar situation took place
when Dirac solved the problem of negative probabilities by introducing his famous equation~\cite{dirac}.
By linearizing the Hamiltonian of the relativistic electron, Dirac not only predicted the existence of
antiparticles but also explained the origin of the additional degree of freedom of the electron.
\ack
The author would like to thank Jerzy Dajka for helpful comments and suggestions.
\section*{References}
\section{Notation and Problem Formulation}
\label{sec:problem_formulation}
In this section the minimax optimal sequential testing problem is defined, and some common notations are introduced. Notations not covered here are defined when they occur in the text.
\subsection{Notation}
\label{ssec:notation}
Random variables are denoted by uppercase letters and their realizations by lowercase letters. Analogously, probability distributions are denoted by uppercase letters and their densities by the corresponding lowercase letters. Blackboard bold is used to indicate product measures. Measurable spaces are denoted by tuples $(\Omega, \Filter)$. Boldface lowercase letters are used to indicate vectors; no distinction is made between row and column vectors. The inner product of two vectors, $\bm{x}$ and $\bm{y}$, is denoted by $\langle \bm{x}, \bm{y} \rangle$ and the element-wise product by $\bm{xy}$. All comparisons between vectors are defined element-wise. The indicator function of a set $\mathcal{A}$ is denoted by $\mathcal{I}(\mathcal{A})$. All comparisons between functions are defined point-wise.
The notation $\partial_{y_k} f(\bm{y})$ is used for the subdifferential \cite[\S 23]{Rockafellar1970} of a convex function $f \colon \mathcal{Y} \subset \mathbb{R}^K \to \mathbb{R}$ with respect to $y_k$ evaluated at $\bm{y}$, that is,
\begin{equation}
\partial_{y_k} f(\bm{y}) \coloneqq \left\{\, c \in \mathbb{R} : f(\bm{y}') - f(\bm{y}) \geq c \, (y'_k-y_k) \quad \forall \bm{y}' \in \mathcal{Y}_k(\bm{y}) \, \right\},
\label{eq:partial_diff}
\end{equation}
with $\mathcal{Y}_k(\bm{y}) \coloneqq \left\{\, \bm{y}' \in \mathcal{Y} : y_j' = y_j \quad \forall j \in \{1, \ldots, K\} \setminus \{ k \} \,\right\}$. The superdifferential of a concave function is defined analogously. Both are referred to as generalized differentials in what follows. The length of the interval corresponding to $\partial_{y_k} f(\bm{y})$ is denoted by
\begin{equation}
\lvert \partial_{y_k} f(\bm{y}) \rvert \coloneqq \sup_{a,b \in \partial_{y_k} f(\bm{y})} \; \lvert a - b \rvert.
\end{equation}
If a function $f_{y_k}$ exists such that $f_{y_k}(\bm{y}) \in \partial_{y_k} f(\bm{y}) \; \forall \bm{y} \in \mathcal{Y}$, then $f_{y_k}$ is called a partial generalized derivative of $f$ with respect to $y_k$. The set of all partial generalized derivatives, $f_{y_k}$, is denoted by
$\partial_{y_k} f$.
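As a concrete one-dimensional illustration of \eqref{eq:partial_diff} (not part of the problem formulation), consider $f(x) = \lvert x \rvert$: the subdifferential at $x = 0$ is the whole interval $[-1, 1]$, so $\lvert \partial_x f(0) \rvert = 2$, while at any $x \neq 0$ the function is differentiable and the subdifferential reduces to the singleton $\{\operatorname{sign}(x)\}$. A brute-force numerical check of the defining inequality:

```python
import numpy as np

def f(x):
    return abs(x)

def in_subdiff(c, x, ys):
    """Check the defining inequality f(y) - f(x) >= c (y - x) on a grid of y."""
    return all(f(y) - f(x) >= c * (y - x) - 1e-12 for y in ys)

ys = np.linspace(-5.0, 5.0, 1001)

# at x = 0 the subdifferential of |.| is the whole interval [-1, 1] ...
assert all(in_subdiff(c, 0.0, ys) for c in np.linspace(-1.0, 1.0, 21))
# ... and nothing outside of it
assert not in_subdiff(1.1, 0.0, ys) and not in_subdiff(-1.1, 0.0, ys)
# at x != 0 the function is differentiable: the subdifferential is {sign(x)}
assert in_subdiff(1.0, 2.0, ys) and not in_subdiff(0.9, 2.0, ys)
# in the notation of the text: |d f(0)| = 2, while |d f(2)| = 0
print("subdifferential checks passed")
```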
\subsection{Underlying stochastic process}
\label{ssec:underlying_stochastic_process}
Let $\bigl( X_n \bigr)_{n \geq 1}$ be a discrete-time sto\-chas\-tic process with values in $(\Omega_X, \Filter_X)$. The joint distribution of $\bigl( X_n \bigr)_{n \geq 1}$ on the cylinder set
\begin{equation}
(\Omega_X^{\mathbb{N}}, \Filter_X^{\mathbb{N}}) \coloneqq \left( \prod_{n \geq 1} \Omega_X, \prod_{n \geq 1} \Filter_X \right)
\end{equation}
is denoted by $\Pbb$, the conditional or marginal distributions of an individual random variable $X$ on $(\Omega_X, \Filter_X)$ by $P$ and the natural filtration \cite[Definition~2.32]{Capasso_Bakstein_2015} of the process $\bigl( X_n \bigr)_{n \geq 1}$ by $\bigl( \Filter_X^n \bigr)_{n \geq 1}$. In order to balance generality and tractability, the analysis in this paper is limited to stochastic processes that satisfy the following three assumptions.
\begin{description}
\item[Assumption 1:] The process $\bigl( X_n \bigr)_{n \geq 1}$ admits a time-homogeneous Markovian representation. That is, there exists a $(\Omega_\Theta, \Filter_\Theta)$-valued stochastic process $\bigl( \Theta_n \bigr)_{n \geq 0}$ adapted to $\bigl( \Filter_X^n \bigr)_{n \geq 1}$ such that
\begin{align}
\Pbb(X_{n+1} \in \mathcal{E} \mid X_1 = x_1, \ldots, X_n = x_n) &= \Pbb(X_{n+1} \in \mathcal{E} \mid \Theta_n = \theta_n) \notag \\
&\eqqcolon P_{\theta_n}(\mathcal{E})
\label{def:markov_property}
\end{align}
for all $n \geq 1$ and all $\mathcal{E} \in \Filter_X$. The distribution of $X_1$ is denoted by $P_{\theta_0}$, where $\theta_0$ is assumed to be deterministic and known \emph{a priori}. An extension to a randomly initialized $\theta_0$ should not be hard but will not be pursued here.
\item[Assumption 2:] There exists a function $\xi\colon \Omega_\Theta \times \Omega_X \rightarrow \Omega_\Theta$ that is measurable with respect to $\Filter_\Theta \otimes \Filter_X$ and that satisfies
\begin{equation}
\Theta_{n+1} = \xi(\Theta_n,X_{n+1}) \eqqcolon \xi_{\Theta_n}(X_{n+1})
\label{eq:update_process_statistic}
\end{equation}
for all $n \geq 0$.
\item[Assumption 3:] For all $\theta \in \Omega_\Theta$, the probability measure $P_{\theta}$, defined in Assumption~1, admits a density $p_{\theta}$ with respect to some $\sigma$-finite reference measure $\mu$.
\end{description}
The set of distributions $\Pbb$ on $(\Omega_X^{\mathbb{N}}, \Filter_X^{\mathbb{N}})$ that satisfy these three assumptions is denoted by $\mathbb{M}$. The set of distributions $P$ on $(\Omega_X, \Filter_X)$ that admit densities with respect to $\mu$ is denoted by $\mathcal{M}_\mu$.
The above assumptions are rather mild and are introduced primarily to simplify the presentation of the results. In general, the sufficient statistic $\Theta$ can be chosen as a sliding window of past samples, that is, $\Theta_n = (X_{n-m}, \ldots, X_n)$, where $m$ is a finite positive integer. Hence, the presented results apply to every discrete-time Markov process of finite order. However, in order to implement the test in practice, $\Omega_\Theta$ should be sufficiently low-dimensional (compare the examples in Section~\ref{sec:examples}). As long as the existence of the corresponding densities is guaranteed, the reference measure $\mu$ in Assumption~3 can be chosen arbitrarily. This aspect can be exploited to simplify the numerical design of minimax sequential tests and is discussed in more detail in Sections~\ref{sec:discussion} and \ref{sec:examples}.
\subsection{Uncertainty model and hypotheses}
\label{ssec:uncertainty_model_and_hypotheses}
For general Markov processes the question of how to model distributional uncertainty is non-trivial and has far-reaching implications for the definition of minimax robustness. In the most general case the joint distribution $\Pbb$ is subject to uncertainty. However, defining meaningful uncertainty models for $\Pbb$ is an intricate task and is usually neither feasible nor desirable. An approach that is more tractable and more useful in practice is to assume that at any given time instant, $n \geq 1$, the \emph{marginal} or \emph{conditional} distribution of $X_n$ is subject to uncertainty.
In this paper it is assumed that the conditional distributions $P_\theta$, as defined in \eqref{def:markov_property}, are subject to uncertainty. More precisely, for each $\theta \in \Omega_\Theta$ the conditional distribution $P_{\theta}$ is replaced by an \emph{uncertainty set} of feasible distributions $\Pcal_{\theta} \subset \mathcal{M}_\mu$. This model induces an uncertainty set for $\Pbb$ which is given by
\begin{equation}
\Pcal \coloneqq \left\{ \Pbb \in \mathbb{M} : \Pbb = \prod_{n \geq 0} P_{\theta_n}, \; P_{\theta_n} \in \Pcal_{\theta_n} \right\}
\label{eq:uncertainty_set}
\end{equation}
and is completely specified by the corresponding family of uncertainty sets for the conditional distributions $\{ \Pcal_{\theta} : \theta \in \Omega_\Theta\}$.
The goal of this paper is to characterize and design minimax optimal sequential tests for multiple hypotheses under the assumption that under each hypothesis the distribution is subject to the type of uncertainty introduced above. That is, each hypothesis is given by
\begin{equation}
\mathcal{H}_k \colon \Pbb \in \Pcal_k, \quad k = 1, \ldots, K,
\label{eq:hypotheses}
\end{equation}
where all $\Pcal_k$ are of the form \eqref{eq:uncertainty_set} and are defined by a corresponding family of conditional uncertainty sets $\{ \Pcal_{\theta}^{(k)} : \theta \in \Omega_\Theta\}$. Note that the parameter $\theta$, which corresponds to the sufficient statistic in Assumption~2, does not depend on $k$, that is, the statistic needs to be chosen such that it is sufficient under all hypotheses. Finally, the set $\Pcal_0$ defines the uncertainty in the distribution under which the expected run length is supposed to be minimum. It is assumed to be of the form \eqref{eq:uncertainty_set} as well. In practice, one is often interested in minimizing the worst case expected run length under one hypothesis or under any hypotheses, that is, $\Pcal_0 = \Pcal_k$ for some $k \in \{1, \ldots, K\}$ or $\Pcal_0 = \bigcup_{k=1}^K \Pcal_k$, respectively. In principle, however, $\Pcal_0$ can be chosen freely by the test designer.
Before proceeding, it is useful to illustrate the assumptions on the underlying stochastic process and the proposed uncertainty model with an example. Consider an exponentially weighted moving-average process, that is,
\begin{equation}
X_{n+1} = \sum_{l=1}^{\infty} a^l X_{n+1-l} + W_{n+1},
\end{equation}
where $a \in (-1,1)$ is a known scalar and $\bigl( W_n \bigr)_{n \geq 1}$ is a sequence of independent random variables that are identically distributed according to $P_W$. This process can equivalently be written as
\begin{equation}
X_{n+1} = a \Theta_n + W_{n+1},
\end{equation}
where the sufficient statistic $\Theta_n$ can be updated recursively via
\begin{equation}
\Theta_{n+1} = \xi_{\Theta_n}(X_{n+1}) = a \Theta_n + X_{n+1}.
\end{equation}
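The equivalence of the two representations can be checked by simulation, assuming the update $\Theta_{n+1} = a\Theta_n + X_{n+1}$ (which reproduces the weighted-sum form) and the initialization $\Theta_0 = 0$, for which the infinite sum truncates to a finite one. An illustrative sketch; the coefficient and the run length are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
a_coef, n_max = 0.35, 150        # small coefficient keeps the simulated path bounded
theta = 0.0                      # Theta_0 = 0, i.e., no pre-sample past
X, W = [], []

for _ in range(n_max):
    w = rng.standard_normal()
    x = a_coef * theta + w       # X_{n+1} = a Theta_n + W_{n+1}
    theta = a_coef * theta + x   # Theta_{n+1} = a Theta_n + X_{n+1}
    X.append(x)
    W.append(w)

# with Theta_0 = 0 the weighted-sum form X_{n+1} = sum_l a^l X_{n+1-l} + W_{n+1}
# truncates to a finite sum, so both representations must agree
for n in range(1, n_max):
    weighted_sum = sum(a_coef ** l * X[n - l] for l in range(1, n + 1))
    assert abs(X[n] - (weighted_sum + W[n])) < 1e-9
print("recursive and weighted-sum representations agree")
```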
In order to introduce uncertainty, it is assumed that with probability $\varepsilon$ the increment $W_n$ is replaced by an arbitrarily distributed outlier. This model yields the following family of conditional uncertainty sets
\begin{equation}
\Pcal_{\theta} = \left\{ P \in \mathcal{M}_\mu : P(\mathcal{E}) = (1-\varepsilon) P_W(\mathcal{E}-a\theta) + \varepsilon H(\mathcal{E}), \; H \in \mathcal{M}_\mu \right\},
\end{equation}
where $\mathcal{E} \in \Filter_X$ and $\mathcal{E}-a\theta$ is shorthand for $\{ x \in \Omega_X : x+a\theta \in \mathcal{E} \}$. In Section~\ref{sec:examples}, a variant of this example is used to illustrate the design of a minimax optimal test with dependencies in the underlying stochastic process.
\subsection{Testing policies and test statistics}
\label{ssec:Stopping_Rules_decision_rules_test_statistics}
A sequential test is specified via two sequences of randomized decision rules, $\bigl( \psi_n \bigr)_{n \geq 1}$ and $\bigl( \bm{\delta}_n \bigr)_{n \geq 1}$, that are adapted to the filtration $\bigl( \Filter_X^n \bigr)_{n \geq 1}$. Each $\psi_n \colon \Omega_X^n \to [0,1]$ denotes the probability of stopping at time instant $n$. Each $\bm{\delta}_n \colon \Omega_X^n \to \Delta^K$ is a $K$-dimensional vector, $\bm{\delta}_n = (\delta_{1,n}, \ldots, \delta_{K,n})$, whose $k$th element denotes the probability of deciding for $\mathcal{H}_k$, given that the test has stopped at time instant $n$. The randomization is assumed to be performed by independently drawing from a Bernoulli distribution with success probability $\psi_n$ and a discrete distribution on $\{1, \ldots, K\}$ with associated probabilities $(\delta_{1,n}, \ldots, \delta_{K,n})$, respectively. The set of randomized $K$-dimensional decision rules defined on $(\Omega_X^n,\Filter_X^n)$ is denoted by $\Delta_n^{K}$. The stopping time of the test is denoted by $\tau = \tau(\psi)$.
For the sake of a more concise notation, let $\pi = \bigl( \pi_n \bigr)_{n \geq 1}$ with $\pi_n = (\psi_n, \bm{\delta}_n) \in \Delta_n^1 \times \Delta_n^K$ denote a sequence of tuples of stopping and decision rules. In what follows, $\pi$ is referred to as a \emph{testing policy}, and the set of all feasible policies is denoted by $\Pi \coloneqq \bigtimes_{n \geq 1} \left( \Delta_n^1 \times \Delta_n^K \right)$.
A test statistic is a stochastic process, $\bigl( T_n \bigr)_{n \geq 0}$, that is adapted to the filtration $\bigl( \Filter_X^n \bigr)_{n \geq 1}$ and allows the stopping and decision rules to be defined as functions mapping from the codomain of $T_n$ to the unit interval. Of particular importance for this paper is the case where the sequence of test statistics, $\bigl( T_n \bigr)_{n \geq 0}$, is itself a time-homogeneous Markov process and the stopping and decision rules are independent of the time index $n$. The corresponding testing policies are in the following referred to as time-homogeneous. This property is formalized in the following definition.
\begin{definition}
A policy $\pi \in \Pi$ is referred to as \emph{time-homogeneous} if there exists an $(\Omega_T, \Filter_T)$-valued stochastic process $\bigl( T_n \bigr)_{n \geq 0}$ that is adapted to the filtration $\bigl( \Filter_X^n \bigr)_{n \geq 1}$ and satisfies
\begin{equation}\label{eq:time-homogeneous_policy}
\psi_n = \psi(T_n) \qquad \text{and} \qquad \bm{\delta}_n = \bm{\delta}(T_n),
\end{equation}
where the functions $\psi \colon \Omega_T \to [0,1]$ and $\bm{\delta} \colon \Omega_T \to [0,1]^K$ are independent of the index $n$.
\label{def:time-homogeneous_policy}
\end{definition}
Focusing on time-homogeneous policies significantly simplifies both the derivation and the presentation of the main results. Moreover, it will become clear in the course of the paper that optimal tests for time-homogeneous Markov processes can always be realized with time-homogeneous policies so that this restriction can be made without sacrificing optimality.
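For a concrete (and deliberately simple) illustration of Definition~\ref{def:time-homogeneous_policy}, consider a classical two-hypothesis SPRT for i.i.d.\ Gaussian observations: the test statistic is the log-likelihood ratio, which updates recursively, and the stopping and decision rules are fixed threshold functions that do not depend on $n$. This sketch is purely illustrative and is not the minimax test developed in this paper; the thresholds and distributions are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = -4.0, 4.0        # stopping thresholds on the log scale

def llr_increment(x):
    """Log-likelihood ratio increment for N(1,1) versus N(0,1) observations."""
    return x - 0.5

def psi(T):
    """Stopping rule: stop (with probability one) once T leaves the interval (A, B)."""
    return 1.0 if (T <= A or T >= B) else 0.0

def delta(T):
    """Decision rule: decide for H1 at the upper threshold, H0 at the lower one."""
    return 1 if T >= B else 0

def run_test(mu):
    T = 0.0             # T_0; note that psi and delta never depend on the time index
    while psi(T) < 1.0:
        T += llr_increment(rng.normal(mu, 1.0))
    return delta(T)

decisions = [run_test(1.0) for _ in range(200)]
print("fraction deciding H1 under H1:", np.mean(decisions))
```

Here $(\psi, \bm{\delta})$ are deterministic special cases of randomized rules, and the statistic $T_n$ is itself a time-homogeneous Markov process, matching \eqref{eq:time-homogeneous_policy}.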
\subsection{Performance metrics and problem formulation}
\label{ssec:performance_metrics_and_problem_formulation}
The performance metrics considered in this paper are the probability of erroneously rejecting the $k$th hypothesis, $\alpha_k$, and the expected run length of the sequential test, $\gamma$. Both are defined as functions of the testing policy and the true distribution:
\begin{align}
\gamma(\pi,\Pbb) &\coloneqq E_{\pi,\Pbb}\bigl[\, \tau(\psi) \,\bigr],
\label{eq:runlength} \\
\alpha_k(\pi,\Pbb) &\coloneqq E_{\pi,\Pbb}\bigl[\, 1-\delta_{k,\tau} \,\bigr],
\label{eq:errors}
\end{align}
with $k = 1, \ldots, K$. Here, $E_{\pi,\Pbb}$ denotes the expected value taken jointly with respect to the distribution of $\bigl( X_n \bigr)_{n \geq 1}$ and the randomized policy $\pi$. A generalization to performance metrics that are defined in terms of the pairwise error probabilities is possible but would considerably complicate notation while adding little conceptual insight.
It is important to note that for the design of robust sequential tests the error probabilities and the expected run length need to be treated as equally important performance metrics. On the one hand, reducing the sample size is typically the reason for using sequential tests in the first place. On the other hand, a test whose error probabilities remain bounded over a given uncertainty set, but whose expected run length can increase arbitrarily, cannot be considered robust. In other words, a robust test should not be allowed to delay a decision indefinitely in order to avoid making a wrong decision.
The first optimality criterion considered in this paper is the weighted sum cost, that is,
\begin{equation}
L_{\bm{\lambda}}(\pi,\pmb{\Pbb}) = \gamma(\pi,\Pbb_0) + \sum_{k=1}^K \lambda_k \alpha_k(\pi,\Pbb_k),
\label{eq:weighted_sum_cost}
\end{equation}
where $\pmb{\Pbb} = (\Pbb_0, \ldots, \Pbb_K)$ denotes a $(K+1)$-dimensional vector of distributions and $\bm{\lambda} = (\lambda_1, \ldots, \lambda_K)$ denotes a $K$-dimensional vector of non-negative cost coefficients. The minimax problem corresponding to the cost function in
\eqref{eq:weighted_sum_cost} reads as
\begin{equation}
\adjustlimits \inf_{\pi \in \Pi} \sup_{\pmb{\Pbb} \in \bm{\Pcal}} \; L_{\bm{\lambda}}(\pi,\pmb{\Pbb}),
\label{eq:unconstrained_minimax}
\end{equation}
where $\pmb{\Pbb} \in \bm{\Pcal}$ is used as a compact notation for $\Pbb_k \in \Pcal_k$, $k = 0, \ldots, K$.
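As a purely illustrative aside, the two performance metrics and the weighted sum cost in \eqref{eq:weighted_sum_cost} can be estimated by Monte Carlo for one specific (non-robust) policy. The sketch below runs a classical Wald SPRT for $K = 2$ simple Gaussian hypotheses; the distributions, the thresholds, and the identification $\Pbb_0 = \Pbb_1$ for the run-length term are assumptions made only for this example.

```python
import random

random.seed(7)

def sprt(draw, mu1=-0.5, mu2=0.5, a=2.2, b=2.2):
    """Wald SPRT for H1: N(mu1, 1) vs H2: N(mu2, 1).
    Samples until the log-likelihood ratio leaves (-b, a);
    returns (accepted hypothesis, run length)."""
    llr, n = 0.0, 0
    while True:
        x = draw()
        n += 1
        # increment log p2(x)/p1(x) for unit-variance Gaussians
        llr += (mu2 - mu1) * x - 0.5 * (mu2 ** 2 - mu1 ** 2)
        if llr >= a:
            return 2, n
        if llr <= -b:
            return 1, n

runs = 4000
total_n, err1, err2 = 0, 0, 0
for _ in range(runs):
    d, n = sprt(lambda: random.gauss(-0.5, 1.0))   # data drawn from P_1
    total_n += n
    err1 += (d != 1)                               # wrongly rejected H_1
    d, _ = sprt(lambda: random.gauss(0.5, 1.0))    # data drawn from P_2
    err2 += (d != 2)                               # wrongly rejected H_2

gamma = total_n / runs        # stand-in for gamma(pi, P_0), taking P_0 = P_1
alpha1, alpha2 = err1 / runs, err2 / runs
lam = (10.0, 10.0)            # arbitrary cost coefficients lambda_k
cost = gamma + lam[0] * alpha1 + lam[1] * alpha2   # L_lambda(pi, P)
```

For this configuration, Wald's approximations suggest error probabilities near $e^{-2.2} \approx 0.11$ and a single-digit expected run length.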
The second optimality criterion is the expected run length under constraints on the error probabilities. The corresponding minimax problem reads as
\begin{equation}
\adjustlimits\inf_{\pi \in \Pi} \sup_{\Pbb_0 \in \Pcal_0} \gamma(\pi,\Pbb_0)
\quad \text{s.t.} \quad
\sup_{\Pbb_k \in \Pcal_k} \alpha_k(\pi,\Pbb_k) \leq \overline{\alpha}_k,
\label{eq:constrained_minimax}
\end{equation}
where the constraint holds for all $k = 1, \ldots, K$ and $\overline{\alpha}_k$ denotes an upper bound on the probability of erroneously deciding against $\mathcal{H}_k$. The following definition fixes the notation for the minimax optimal policies and concludes this section.
\begin{definition}
The set of time-homogeneous policies that are optimal in the sense of \eqref{eq:unconstrained_minimax} is denoted by $\PiLambda^*(\bm{\Pcal})$. The set of time-homogeneous policies that are optimal in the sense of \eqref{eq:constrained_minimax} is denoted by $\PiAlpha^*(\bm{\Pcal})$.
\label{def:minimax_time-homogeneous_policies}
\end{definition} | {"config": "arxiv", "file": "1811.04286/02-problem_formulation.tex"} |
TITLE: Why intuitively do the quaternions satisfy the mixture of geometric and algebraic properties that they do?
QUESTION [9 upvotes]: [I completely rewrote the question to see if I could make it clearer. The comments below won't make any sense. In fact, my original question has been answered by Eric Wolfsey, so I may restore it.]
When you read about the quaternions on Wikipedia and on many other sources, they are defined with the relation $$i^2 = j^2 = k^2 = ijk = -1$$
This comes across as completely random. There is no explanation of where this relation comes from.
These sources then go on to prove that the quaternions satisfy various algebraic and geometric properties. Among these properties, they act on 3D vectors as rotations (but as a double-covering), they act on 4D space as rotations, they're associative, they're distributive. Etc. All of this comes across as a random coincidence when you compare it to the defining relation.
Some of the time quaternions are thought of as 3D rotations. But this is a lossy interpretation, as a quaternion $q$ and its negation $-q$ represent the same rotation. For similar reasons, addition of quaternions starts to seem more bizarre.
When defined as 4D rotations, they happen to be an "isoclinic rotation". I haven't thought enough about this concept...
When defined as a complex matrix that satisfies a linear algebra relationship (sourced from here: https://qchu.wordpress.com/2011/02/12/su2-and-the-quaternions/), closure under multiplication is baked in, but closure under addition comes across as a coincidence. This relation is: $$M\text{ is a $2\times2$ matrix over $\mathbb C$}, M^\dagger M \in \mathbb R, \det(M) \geq 0 $$
Geometric Algebra textbooks show that they're an instance of a larger family of algebras with similar interpretations, called Geometric Algebra (or Clifford Algebras). But then why do Clifford Algebras exist? Why do they have their mixture of algebraic and geometric properties? It feels like you're replacing a small mystery with an even bigger mystery.
REPLY [6 votes]: The idea is that rotations have a certain rigidity in two dimensions: given a basis $\{e_1,e_2\}$, once you know where a rotation sends $e_1$, the image of $e_2$ is uniquely determined. Moreover, the conditions that uniquely define the image of $e_2$ can be written linearly in the image of $e_1$ (if $\{e_1,e_2\}$ is an orthogonal basis, then the conditions are that the image of $e_2$ must be orthogonal to the image of $e_1$ and must make the matrix have determinant $1$). As a result, scalar multiples of rotations form a linear subspace of all matrices.
Here's how it works in detail. Let $V=\mathbb{R}^2$. Consider the bilinear map $\det:V\times V\to\mathbb{R}$ which takes a pair of vectors $(v,w)$ to the determinant of the matrix with columns $v$ and $w$. We can think of this bilinear map instead as a linear map $V\to V^*$, or, using the canonical isomorphism $V^*\cong V$ given by the inner product, as a linear map $J:V\to V$. Explicitly, $J(v)$ is the unique vector which satisfies $\langle w,J(v)\rangle=\det(v,w)$ for all $w\in V$. Even more explicitly, if $\{e_1,e_2\}$ is the standard basis for $V$, then $J(e_1)=e_2$ and $J(e_2)=-e_1$ (so $J$ is in fact just multiplication by $i$ when we identify $ae_1+be_2$ with $a+bi$).
The key observation is now that a matrix $T$ with columns $v$ and $w$ is a scalar multiple of an element of $SO(2)$ iff $w=J(v)$. To prove this, we may scale $T$ to assume $v$ is a unit vector (if $v=0$ it is easy to see both conditions are equivalent to $w=0$). Then $T$ is a scalar multiple of an element of $SO(2)$ iff it is an element of $SO(2)$ iff $w$ is orthogonal to $v$ and $\det(v,w)=1$. Note that since the orthogonal complement of $v$ is $1$-dimensional, there is a unique $w$ orthogonal to $v$ such that $\det(v,w)=1$. Now notice that $J(v)$ is orthogonal to $v$ since $\langle v,J(v)\rangle=\det(v,v)=0$ and that $\det(v,J(v))=\langle v,v\rangle=1$. So in fact $J(v)$ is the unique $w$ that will make $T$ a scalar multiple of an element of $SO(2)$. Thus $T$ is a scalar multiple of an element of $SO(2)$ iff $w=J(v)$.
Finally, the set of matrices which satisfy $w=J(v)$ is obviously closed under addition, and so we conclude that so is the set of scalar multiples of elements of $SO(2)$. Moreover, we see that the map taking such a matrix to its first column is a linear isomorphism. In this way we cover the standard identification between $\mathbb{C}$ (considered as the set of scalar multiples of elements of $SO(2)$) and $\mathbb{R}^2$.
[This discussion can be made more basis-free in various ways. In particular, instead of explicitly talking about the columns of matrices, you can fix a unit vector in $\bigwedge^2 V$, use that unit vector to get an isomorphism $\bigwedge^2 V\cong\mathbb{R}$ and replace $\det$ by the wedge product map $V\times V\to \bigwedge^2 V\cong\mathbb{R}$ when defining $J$. Then, one direction of the second part of the argument can be recast as a proof that if $T$ is a scalar multiple of an element of $SO(2)$ then $T$ commutes with $J$. For the converse, I don't see a way to completely avoid using a basis.]
For quaternions and $SU(2)$, the story is the same, except that you take $V=\mathbb{C}^2$ and $J$ is conjugate-linear instead of linear (since the inner product is sesquilinear). | {"set_name": "stack_exchange", "score": 9, "question_id": 3196043} |
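These closure claims are easy to sanity-check numerically. Below is a minimal Python sketch (standard library only; the helper names are mine) using the usual embedding of a quaternion $a+bj$, $a,b\in\mathbb{C}$, as the matrix $\begin{pmatrix} a & -\bar b \\ b & \bar a\end{pmatrix}$ (one of several equivalent conventions):

```python
import random

random.seed(1)

def quat(a, b):
    """Embed the quaternion a + b*j (a, b complex) as the 2x2 complex matrix
    [[a, -conj(b)], [b, conj(a)]] (one standard convention)."""
    return [[a, -b.conjugate()], [b, a.conjugate()]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def is_quat(M, tol=1e-12):
    """Second column determined by the first: the 'w = J(v)' shape."""
    return (abs(M[0][1] + M[1][0].conjugate()) < tol and
            abs(M[1][1] - M[0][0].conjugate()) < tol)

rand_c = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
Q1, Q2 = quat(rand_c(), rand_c()), quat(rand_c(), rand_c())

closed_add = is_quat(matadd(Q1, Q2))   # closure under addition
closed_mul = is_quat(matmul(Q1, Q2))   # closure under multiplication
# det = |a|^2 + |b|^2 >= 0, as in the characterization quoted in the question
det_nonneg = (Q1[0][0] * Q1[1][1] - Q1[0][1] * Q1[1][0]).real >= 0

# The defining relations i^2 = j^2 = k^2 = ijk = -1 also hold in this model
# (with k defined as the product i*j inside the embedding):
I2 = quat(1 + 0j, 0j)
qi, qj = quat(1j, 0j), quat(0j, 1 + 0j)
qk = matmul(qi, qj)
neg = lambda M: [[-z for z in row] for row in M]
units_ok = (matmul(qi, qi) == neg(I2) and matmul(qj, qj) == neg(I2) and
            matmul(qk, qk) == neg(I2) and
            matmul(matmul(qi, qj), qk) == neg(I2))
```

So closure under both addition and multiplication, the non-negative determinant, and the "mysterious" relation from the question all drop out of the single linear condition on the second column.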
TITLE: how to calculate the quotient group of $U/DU$ where $U$ is an arbitrary unitary matrices, and $DU$ is diagonal unitary matrices?
QUESTION [0 upvotes]: The question is the calculation of $U/DU$, where $U$ denotes the unitary matrices with non-degenerate eigenvalues and $DU$ denotes the diagonal unitary matrices with non-degenerate eigenvalues.
The answer is $UA_{0}U^{-1}\cong U/DU$, where $A_0$ is the diagonal matrix with eigenvalues $1,\ 1/2,\ 1/3,\ \ldots,\ 1/n$.
However, I cannot see how one can calculate this. The quotient is defined as follows: $G/H$ is a group whose elements are the equivalence classes $[g]$,
where the equivalence is defined by $g\sim g'$ if $g'=gh$ for some $h \in H$. I can't derive the above expression by using this definition. The original source that I encountered this problem is this;
REPLY [2 votes]: [Reference] Wikipedia: Homogeneous space and Wikipedia: Group action
Note. I will use a general notation for the unitary groups.
Let $U(n)$ be the set of unitary matrices, and let $DU(n)$ be the set of unitary diagonal matrices.
Note. $DU(n)$ is a closed subgroup of the Lie group $U(n)$, so the coset space $U(n)/DU(n)$ is a smooth manifold (for $n \geq 2$, $DU(n)$ is not normal in $U(n)$, so this quotient is a manifold of cosets rather than a group).
Let $\mathfrak{M}\equiv\{ UA_0U^{-1} : U\in U(n) \}$ where $A_0$ is the diagonal matrix with entries $1,\frac{1}{2},\dotsc,\frac{1}{n}$.
Notice that $\mathfrak{M}$ is not a group, since $\mathfrak{M}$ does not contain the identity matrix $I$.
The unitary group $U(n)$ acts on $\mathfrak{M}$ (by conjugation)
$$
U(n) \times \mathfrak{M} \to \mathfrak{M} \quad\text{defined by}\quad
(V, UA_0U^{-1}) \mapsto V(UA_0U^{-1})V^{-1}
$$
for $V\in U(n)$ and $UA_0U^{-1}\in\mathfrak{M}$. In general the action of $V$ on $UA_0U^{-1}$ is denoted by $V\cdot(UA_0U^{-1})$.
(1) The action is well-defined since
$$
V\cdot(UA_0U^{-1}) \equiv V(UA_0U^{-1})V^{-1} = (VU)A_0(VU)^{-1} \in \mathfrak{M}
$$
(2) (Identity) For the identity matrix $I$, we have
$$
I\cdot(UA_0U^{-1}) \equiv I(UA_0U^{-1})I^{-1} = UA_0U^{-1}
$$
(3) (Compatibility) For $V,W\in U(n)$, we have
$$
\begin{align*}
(VW)\cdot(UA_0U^{-1}) &\equiv (VW)(UA_0U^{-1})(VW)^{-1} \\
&= V(W(UA_0U^{-1})W^{-1})V^{-1} = V\cdot(W\cdot(UA_0U^{-1}))
\end{align*}
$$
(4) Moreover the action is differentiable because it consists of matrix multiplications.
Claim. The homogeneous space $\mathfrak{M}$ is diffeomorphic to the quotient space $U(n)/DU(n)$.
$\mathfrak{M}$ is a homogeneous space.
A homogeneous space for a (Lie) group $G$ is a non-empty (smooth manifold) $X$ on which $G$ acts transitively (from Wikipedia: Homogeneous space).
In our case, "$\mathfrak{M}$ is a homogeneous space for $U(n)$" means precisely that the action of $U(n)$ on $\mathfrak{M}$ is transitive, i.e., that $\mathfrak{M}$ is a single $U(n)$-orbit. But this is trivial by the definition of
$$
\mathfrak{M}=\{ U\cdot A_0\equiv UA_0U^{-1} : U\in U(n) \} = \text{$U(n)\cdot A_0$ (the $U(n)$-orbit of $A_0$) }
$$
$\mathfrak{M}\cong U(n)/DU(n)$
For a fixed $x\in X$, consider the map from $G$ to $X$ given by $g\mapsto g\cdot x$ for all $g\in G$. The image of this map is $G\cdot x$, the $G$-orbit of $x$. The standard quotient theorem of set theory then gives a natural bijection between $G/G_x$ and $G\cdot x$. Here $G_x=\{g\in G : g\cdot x=x\}$ denotes the stabilizer subgroup of $G$ with respect to $x$ (from Wikipedia: Group action).
The statement above is very basic fact. Moreover, the bijection becomes a diffeomorphism when $G$ is a Lie group. In our case, since $\mathfrak{M}$ is a single $U(n)$-orbit of $A_0$, we have
$$
\mathfrak{M} \cong U(n)/U(n)_{A_0}
$$
It remains to show that the stabilizer $U(n)_{A_0}=\{V\in U(n) : V\cdot A_0=A_0\}$ is nothing but $DU(n)$. This is easy to check using matrix multiplication (the key point is that $A_0$ has distinct diagonal entries, so any matrix commuting with it must be diagonal), and we leave it as an exercise.
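A quick numerical illustration of that last step (standard-library Python; the matrix helpers and the $3\times 3$ example are mine): conjugation by a diagonal unitary fixes $A_0$, while a non-diagonal unitary moves it, consistent with $U(n)_{A_0} = DU(n)$.

```python
import cmath, math

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(M):
    """Conjugate transpose; for a unitary matrix this is its inverse."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def close(M, N, tol=1e-12):
    n = len(M)
    return all(abs(M[i][j] - N[i][j]) < tol
               for i in range(n) for j in range(n))

# A_0 = diag(1, 1/2, 1/3): distinct (non-degenerate) eigenvalues.
A0 = [[1.0, 0, 0], [0, 0.5, 0], [0, 0, 1 / 3]]

# A diagonal unitary (arbitrary phases): an element of DU(3).
D = [[cmath.exp(0.7j), 0, 0], [0, cmath.exp(-1.3j), 0], [0, 0, cmath.exp(2.1j)]]

# A non-diagonal unitary: a real rotation mixing the first two coordinates.
c, s = math.cos(0.5), math.sin(0.5)
R = [[c, -s, 0], [s, c, 0], [0, 0, 1]]

fixes_A0 = close(matmul(matmul(D, A0), dagger(D)), A0)       # D stabilizes A_0
moves_A0 = not close(matmul(matmul(R, A0), dagger(R)), A0)   # R does not
```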
\section{Random walk bridges}\label{sec:ale}
\subsection{Bridge of the random walk on \texorpdfstring{${\{0,1 \}}^d$}{}}\label{subsec:bridge01}
\subsubsection{Setting and notation}\label{subsubsec:not_hyper}
In this Subsection we are interested in studying a continuous time random walk on the hypercube ${\{0,\,1\}}^d$, $d\ge 1$. We assume that the walker jumps in the direction $e_i$ with rate $\alpha_i\geq 0$, $i=1,\ldots,d$.
To obtain bounds on this object, we will start with the walk on $\{0,\,1\}$ and then use the fact that the random walk on the $d$-dimensional hypercube is a product of $1$-dimensional random walks.
We denote by $P$ the law on the space of c\`adl\`ag paths $\bbD([0,1];\{ 0,\,1\})$ of the continuous time random walk $X$ on $\{ 0,\, 1\}$ with jump rate $\alpha$ and time horizon $T=1$. The bridge of the random walk from and to the origin is given by
\[
P^{00}(\cdot) \ldef P(\,\cdot \,| X_0 =0,\, X_1=0).
\]
We observe that the space of c\`adl\`ag paths with initial and terminal point at the origin, which we denote by $\bbD_0([0,1];\{ 0,\,1\})$, is in bijection with the set of all subsets of $(0,1)$ with even cardinality,
\[
\mathscr{U}:= \{ U \subseteq (0,1):\, |U|<+\infty,\, |U| \in 2\N \},
\]
where, for a set $A$, $|A|$ denotes its cardinality.
In fact, the bijection is simply given by the map $\bbU : \bbD_0([0,1];\{ 0,\,1\})\to \mathscr{U}$ that associates to each path its jump times; we denote its inverse by $\bbX \ldef \bbU^{-1}$.
We shall endow $\mathscr{U}$ with the $\sigma$-algebra $\cU$ induced by $\bbU$, that is, we say that $A\in \cU$ if and only if $\bbX^{-1}(A)$ belongs to the Borel $\sigma$-algebra of $\bbD_0([0,1];\{ 0,\,1\})$.
With a little abuse of notation, we will still denote by $P^{00}$ the probability measure on $(\mathscr{U},\cU)$ given by the pushforward of $P^{00}$ via $\bbU$.
Note that since $\bbU$ is only defined on $\bbD_0([0,1];\{0,1\}) \subsetneq \bbD([0,1];\{0,1\})$ and $P^{00}$ is a measure on $\bbD([0,1];\{0,1\})$, the pushforward may not be well-defined.
However here we do not have to worry since $P^{00}$ is supported on $\bbD_0([0,1];\{0,1\})$.
In order to characterize $P^{00}$ as the unique invariant distribution of a given generator, we introduce a set of perturbations of $\mathscr{U}$ which allows the complete exploration of the support. For $r\neq s \in (0,1)$, we define $\Psi_{r,s} : \mathscr{U}\to \mathscr{U}$ by
\[
\Psi_{r,s}(U) \ldef \begin{cases}
U\cup \{r,s\}, & \mbox{ if } \{r,s\}\cap U = \emptyset, \\
U\setminus \{r,s\}, & \mbox{ if } \{r,s\}\subset U, \\
U, & \mbox{ otherwise. }
\end{cases}
\]
\begin{remark}
Let $U\in \mathscr{U}$ be the set of jump times of a sample path $X\in \bbD_0([0,1];\{ 0,\,1\})$. It is easy to see that $\Psi_{r,s} U$, $r<s$,
corresponds to the path $X+\mathbbm{1}_{[r, s)}$ if $\{r,s\}\cap U = \emptyset$, to $X-\mathbbm{1}_{[r, s)}$ if $\{ r,s\}\subset U$ and to $X$ otherwise.
\end{remark}
For convenience in the exposition, we will need the following additional notation.
\begin{itemize}
\item $\mathscr{A} := \{ (r,s)\in {(0,1)}^2:\, r<s\}$.
\item For $U \in \scrU$, we denote by ${[U]}^2:= \{ (r,s) \in \mathscr{A}:\, r,s\in U \}$. In words, ${[U]}^2$ is the set of pairs of elements of $U$.
\end{itemize}
\paragraph{Choice of the distance}\label{par:choice_dist}
We equip $\scrU$ with the graph distance $d$ induced by $\Psi$. That is, we say that $U$ and $V$ are at distance one if and only if there exists $(r,s)\in \mathscr{A}$ such that $ \Psi_{r,s}V = U$.
The distance between two arbitrary trajectories $U,V\in\mathscr{U}$ is defined to be the length of shortest path joining them.
It is worth remarking that $\mathscr{U}$ is a highly non-trivial graph, as every vertex has uncountably many neighbors. Nonetheless, the distance is well-defined: by removing one pair after the other, we see that any $U\in \mathscr{U}$ has distance $|U|/2$ from the empty set.
It follows in particular that the graph is connected.
\subsubsection{Identification of the generator}
As stated in the Introduction, our goal is to obtain a Markovian dynamics stemming from a change-of-measure formula, already present in~\citet[Example 30]{CR}. We can exploit it to obtain the following proposition.
\begin{prop}\label{prop:hypergen}$P^{00}$ is the only invariant measure of a Markov process ${\{ U_t \}}_{t\geq0}$ on $\scrU$ whose generator is
\begin{equation}\label{eq:hypergen}
\mathscr{L}f(U):= \alpha^2\int_{\mathscr{A}} \left(f(\Psi_{r,s} U ) - f(U) \right)\De r \De s + \sum_{A \in {[U]}^2}\left( f(\Psi_{A}U)-f(U)\right)
\end{equation}
for all $f:\mathscr{U}\to \bbR$ bounded measurable functions.
\end{prop}
\begin{proof}
To show that $P^{00}$ is invariant for $\mathscr{L}$, we show that for any bounded measurable function $f$
\[ E_{P^{00}}( \mathscr{L} f ) =0, \] which yields the conclusion.
An application of~\citet[Theorem 12]{CR} gives a characterization of $P^{00}$ as the only measure on $\bbD([0,1];\{ 0,1\})$ such that $P^{00}(X_0=X_1=0)=1$ and for all bounded measurable functions $F: \bbD([0,1];\{ 0,1\} ) \times \scrA \rightarrow \bbR$
\[\alpha^2 E_{P^{00}} \left( \int_{\mathscr{A}} F(X + \mathbbm{1}_{[r,s)}, r,s) \De s \De r \right) = E_{P^{00}} \left( \sum_{r<s, r,s \in \bbU(X) } F(X,r,s) \right), \]
where the symbol $+$ stands for the sum in $\bbZ/2\bbZ$ and
\begin{equation*}
{(X+\mathbbm{1}_{[r,s)})}_t= \begin{cases} X_t+1 \quad & \mbox{if $ t \in [r,s)$ } \\
X_t \quad & \mbox{otherwise.}
\end{cases}
\end{equation*}
Passing to the image measure, that is, considering functionals of the type $F(X,r,s)=G(\bbU(X),r,s)$ we obtain that for all $G:\mathscr{U}\times \scrA\rightarrow \bbR$ bounded and measurable
\begin{equation}\label{eq:hypergen1} \alpha^2 E_{P^{00}} \left( \int_{\mathscr{A}} G(\Psi_{r,s}U,r,s) \De s \De r \right) = E_{P^{00}} \left( \sum_{ (r,s) \in {[U]}^2 } G(U,r,s) \right),
\end{equation}
where we took advantage of the fact that, for any $X \in \bbD_0([0,1];\{0,1\})$, we have that $\bbU\big(X+ \mathbbm{1}_{[r,s)}\big) = \Psi_{r,s}U $ for almost every $(r,s) \in \scrA$.
If we now fix $f:\scrU \rightarrow \bbR$ bounded and measurable, define $ G(U,r,s) = f(U) -f(\Psi_{r,s}U)$ and plug it back into~\eqref{eq:hypergen1}, we obtain the desired result, observing that $\Psi_{r,s}(\Psi_{r,s} U) = U$. We do not prove uniqueness here, as it is implied by Proposition~\ref{prop:lip}, which we prove later.
\end{proof}
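As a sanity check of the invariance, the identity $E_{P^{00}}[\mathscr{L}f]=0$ can be verified directly for the test function $f(U)=|U|$. Since Lebesgue-almost every pair $(r,s)$ is disjoint from $U$, \eqref{eq:hypergen} gives $\mathscr{L}f(U) = 2\alpha^2\lambda_{\mathscr{A}}(\mathscr{A}) - 2\binom{|U|}{2} = \alpha^2 - |U|(|U|-1)$, and, recalling that under $P^{00}$ the number of jumps is Poisson$(\alpha)$ conditioned to be even, the invariance reduces to $E[N(N-1)] = \alpha^2$ for $N=|U|$. A short numerical check (standard-library Python; purely illustrative):

```python
import math

alpha = 0.8  # an arbitrary positive jump rate, chosen for illustration

# Under P^00 the number of jumps N = |U| satisfies
# P(N = 2k) = alpha^(2k) / (2k)! / cosh(alpha).
K = 40  # truncation level; the terms decay factorially
w = [alpha ** (2 * k) / math.factorial(2 * k) for k in range(K)]
Z = sum(w)  # equals cosh(alpha) up to truncation

# f(U) = |U| gives Lf(U) = alpha^2 - |U|(|U| - 1); invariance says E[Lf] = 0.
mean_Lf = sum((alpha ** 2 - 2 * k * (2 * k - 1)) * p
              for k, p in zip(range(K), w)) / Z
```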
In the next pages we will construct explicitly a dynamics ${(U_t)}_{t\geq 0}$ starting in $U\in \mathscr{U}$ whose infinitesimal generator is $\mathscr L$. We denote by $\bbP^U$ the law of such process, by $\bbE^U$ the corresponding expectation and, for any $f:\mathscr{U}\to \bbR$ bounded measurable function, by
\[
S_t f(U) \ldef \bbE^U\left[f(U_t)\right]
\]
the semigroup associated to ${(U_t)}_{t\geq 0}$.
The proof that $\mathscr{L}$ is characterizing boils down to showing that for any $f \in \mathrm{Lip}_1(\mathscr{U})$ such that $E_{P^{00}}[f]=0$, the Stein equation
\[
\mathscr{L} g = f,
\]
has a solution.
This is achieved with the following fundamental proposition.
\begin{prop}\label{prop:lip} For any $f\in \mathrm{Lip}_1(\mathscr U)$, all $U,V\in \mathscr{U}$ and all $t\geq 0$
\begin{equation}\label{eq:bound}
\left| S_t f(U)-S_t f(V)\right| \leq (4\exp(-t/2)+\exp(-t)) d(U,V).
\end{equation}
\end{prop}
The proof of Proposition~\ref{prop:lip} is based on a coupling argument. It will suffice to construct two Markov chains ${(U_t)}_{t\geq 0}$, ${(V_t)}_{t\geq 0}$ with generator $\mathscr{L}$ starting from neighbouring points $U,V\in \mathscr{U}$ such that $U_t$, $V_t$ are at most at distance two and coalesce within an exponentially distributed time.
As a remarkable consequence of Proposition~\ref{prop:lip} we can show that for any probability measure $\nu\in \cP(\mathscr{U})$,
the measure $\nu_\# S_t$, determined by $\nu_\# S_t(A) \ldef E_\nu[S_t \mathbbm{1}_A]$, converges exponentially fast to $P^{00}$ in the $1$-Wasserstein distance on $(\mathscr{U},d)$.
In particular, this implies that for any $f\in \mathrm{Lip}_1(\mathscr{U})$ with $E_{P^{00}}[f]=0$ the function
\begin{equation}\label{eq:sol}
g(U)\ldef -\int_0^\infty S_t f(U)\,\De t,\quad U\in \mathscr{U}
\end{equation}
is well-defined and solves the Stein equation $\mathscr{L} g = f$ (see Proposition~\ref{prop:SteinSol} below). This allows for the following quantitative estimate of the distance between two bridges of random walks on the hypercube with different jump rates.
\begin{prop}\label{prop:1wassone} Let $P^{00}$ and $Q^{00}$ be the laws on $\mathscr{U}$ of the bridges from and to the origin of random walks on $\{ 0, 1\}$ with rates $\alpha$ and $\beta$ respectively. Then,
\[
d_{W,\,1}\left(P^{00},\,Q^{00}\right)\le \frac{9}{2}\left|\alpha^2-\beta^2\right|.
\]
\end{prop}
\begin{proof} We shall see that the proof is an easy application of Proposition~\ref{prop:lip} and of~\eqref{eq:sol}. To simplify notation, let us write $P$ and $Q$ rather than $P^{00}$ and $Q^{00}$.
Let $\mathscr L^Q$, $\mathscr L^P$ be as in~\eqref{eq:hypergen} with associated semigroup ${(S^Q_t)}_{t\ge 0}$ and ${(S^P_t)}_{t\ge 0}$.
By definition of Wasserstein distance we have that
\[
d_{W,1}(P,Q) = \sup_{f\in \mathrm{Lip}_1(\mathscr{U}), E_{P}[f]=0} \left|E_{Q}[f]\right|.
\]
Next, fix any $f\in \mathrm{Lip}_1(\mathscr{U})$ such that $E_{P}[f]=0$. We have that $\mathscr{L}^P g = f$, where $g$ is given by~\eqref{eq:sol}.
Using $E_Q[\mathscr{L}^Q g]=0$, Tonelli's theorem and invoking Proposition~\ref{prop:lip} we deduce that
\begin{align*}
\left|E_{Q}[f]\right| & =\left|E_{}[\mathscr L^Q g-\mathscr L^P g]\right| \\ & \le \left|\alpha^2-\beta^2\right|\int_0^\infty E_{Q}\left[\int_{\mathscr{A}} |S^P_t f(\Psi_{r,s}U)-S^P_t f(U)|\,\De r \De s\right]\, \De t \\
&\leq \left|\alpha^2-\beta^2\right| \int_0^\infty \int_{\mathscr{A}} \left(4\exp(-t/2)+\exp(-t)\right) \De r \De s\, \De t\\
& = \left|\alpha^2-\beta^2\right|\,\lambda_{\mathscr{A}}(\mathscr{A})\,(8+1) = \frac{9}{2}\left|\alpha^2-\beta^2\right|,
\end{align*}
which is a uniform bound in $f$ and thus proves the Proposition.
\end{proof}
\begin{remark}
The bound obtained is compatible with what is known about conditional equivalence. In fact it is shown in~\citet{CR} that two random walks on $\{ 0,1\}$ with jump rates $\alpha$ and $\beta$ have the same bridges if and only if $\alpha=\beta$.
\end{remark}
\begin{remark} Clearly, the same inequality of Proposition~\ref{prop:1wassone} holds also for $P^{00}$ and $Q^{00}$ as measures on
$\bbD_0([0,1];\{ 0,1\})$ with metric $d_\bbD(X,Y) \ldef d(\bbU(X),\bbU(Y))$ for all paths $X,Y\in \bbD_0([0,1];\{ 0,1\})$.
Here, $\bbU$ is the bijection between $\bbD_0([0,1];\{ 0,1\})$ and $\mathscr{U}$ described above.
\end{remark}
\begin{remark}[Extensions]
The scope of application of Proposition~\ref{prop:lip} and Proposition~\ref{prop:hypergen} can go well beyond comparing two walks with homogeneous jump rates.
Arguing as in Subsection~\ref{subsec:revCTRW} and Subsection~\ref{subsec:schemesRW}, it is possible to derive distance bounds between simple random walk bridges on the hypercube and bridges of random walks with non-homogeneous and possibly time-dependent rates, as well as to show convergence rates for certain approximation schemes. Another extension one may want to consider is to bridges whose terminal point is different from the origin.
For brevity we do not include in this paper such bounds as they do not present any additional difficulty with respect to those for bridges of walks on $\bbZ$.
\end{remark}
Proposition~\ref{prop:1wassone} can be easily extended to random walks on the $d$-dimensional hypercube. In fact, we have the following corollary of which we only sketch the proof.
\begin{cor}\label{cor:higher_dim}
Let $d\ge 2$ and let $P^{0,\,d}$ and $Q^{0,\,d}$ be the laws of two bridges of random walks on ${\{0,\,1 \}}^d$ with jump rates $\alpha_i$ resp. $\beta_i$ in the direction $e_i$, for $i = 1,\ldots,d$. Then
\[
d_{W,\,1}(P^{0,\,d},\,Q^{0,\,d})\le \frac{9}{2}\sum_{i=1}^d\left|\alpha_i^2-\beta_i^2\right|.
\]
where the Wasserstein distance is taken on $(\mathscr{U}^d,d_{\mathscr{U}^d})$ with the metric given by
\[
d_{\mathscr{U}^d}(U,V) \ldef \sum_{i=1}^d d(U_i,\,V_i),\quad U,V\in\mathscr{U}^d.
\]
\end{cor}
\begin{proof} The proof is in fact a straightforward consequence of the fact that the random walk on the $d$-dimensional hypercube is just a product of one-dimensional walks.
This allows us to construct a dynamics on $\mathscr{U}^d$ by considering simply $d$ independent processes ${(U_{1,t},\ldots,U_{d,t})}_{t\geq 0}$ with ${(U_{i,t})}_{t\geq 0}$ associated to a generator $\mathscr{L}^i$ as in Proposition~\ref{prop:hypergen} with parameter $\alpha_i$.
The generator of ${(U_{1,t},\ldots,U_{d,t})}_{t\geq 0}$ is then just $\mathscr{L}f(U) \ldef \sum_{i=1}^d \mathscr{L}^i f(U)$, with $\mathscr{L}^i$ acting only on the $i$-th coordinate. This allows us to conclude together with the estimate~\eqref{eq:bound}.
\end{proof}
The next subsections are devoted to the proofs of Proposition~\ref{prop:hypergen} and Proposition~\ref{prop:lip}.
\paragraph{Holding times and jump kernel}\label{par:holding_times}
A continuous time Markov chain can equivalently be described via its generator or through a rate function
$c: \mathscr{U} \rightarrow \bbR_+$ and a jump kernel $\mu\ldef{\{\mu_{U}(\cdot)\}}_{U \in \mathscr{U}} \subseteq \cP(\scrU)$. Once these have been chosen, the Markov dynamics is obtained by the following simple rules~\cite[Chapter 9, Section 3]{Bre13Markov}:
\begin{itemize}
\item the chain sits in its current state $U$ for a time which is exponentially distributed with parameter $c(U)$, and then makes a jump.
\item The next state is chosen according to the probability law $\mu_U$.
\end{itemize}
We call this dynamics a $(c,\mu)$-Markov chain.
In the next lines we shall define a pair $(c,\mu)$ for describing our Markov chain. We define $c$ via
\[
c(U) := \frac{\alpha^2}{2}+\binom{|U|}{2}.
\]
To define $\mu_U$, we first introduce the measure $\lambda_U \in \cP(\scrU)$ through
\[
E_{\lambda_{U}} (f) := \int_{\mathscr{A}} f(\Psi_{ u,v }U ) \De u \De v
\]
and then
\begin{equation}\label{def:jumpkernel}
\mu_{U} := \frac{1}{c(U)} \Big(\alpha^2 \lambda_U+\sum_{A \in {[U]}^2 } \delta_{\Psi_A U}\Big)
\end{equation}
Let us note that $\mu_{U}$ is supported on the set
\[ N(U) := {\{\Psi_{A}U \}}_{A \in \mathscr{A}} . \]
In the following Subsection we construct concretely a Markov process which is a $(c,\mu)$-Markov chain. We show in particular that a $(c,\mu)$-Markov chain has generator $\mathscr{L}$ (see Proposition~\ref{prop:hypergen}).
An informal description of a Markov chain ${(U_t)}_{t \geq 0}$ admitting $\mathscr{L}$ as generator is as follows: at any time, each pair of points in $U_t$ dies at rate $1$, and a new pair $\{r,s\}$, with $(r,s)$ uniformly distributed on $\mathscr{A}$, is added to $U_t$ at total rate $\alpha^2\lambda_{\mathscr{A}}(\mathscr{A})=\alpha^2/2$.
When one of these events occurs, everything starts afresh.
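The informal description above translates directly into a Gillespie-type simulation. The following sketch (standard-library Python; purely illustrative, with variable names of our choosing) simulates the pair dynamics and compares the long-run time average of $|U_t|$ with the stationary value: the induced birth-death chain on $m=|U_t|$ has birth rate $\alpha^2/2$ and death rate $\binom{m}{2}$, whose detailed-balance solution $\pi(2k)\propto \alpha^{2k}/(2k)!$ is exactly the law of the number of jumps under $P^{00}$, with mean $\alpha\tanh\alpha$.

```python
import math
import random

random.seed(42)
alpha = 1.0  # jump rate of the underlying walk (illustrative choice)

def gillespie(alpha, n_events=50000):
    """Simulate the pair dynamics on U: every pair of points of U dies at
    rate 1, and a new pair (r, s), uniform on {r < s}, is born at rate
    alpha^2 / 2.  Returns the time-weighted average of |U_t|."""
    U = []               # current (even) set of jump times
    t, acc = 0.0, 0.0
    for _ in range(n_events):
        m = len(U)
        rate_death = m * (m - 1) / 2        # one unit-rate clock per pair
        rate_birth = alpha ** 2 / 2         # total mass of alpha^2 * lambda_A
        total = rate_death + rate_birth
        dt = random.expovariate(total)
        acc += m * dt
        t += dt
        if random.random() < rate_birth / total:
            r, s = sorted((random.random(), random.random()))
            U += [r, s]                     # a pair is born
        else:
            i, j = sorted(random.sample(range(m), 2), reverse=True)
            del U[i], U[j]                  # a uniformly chosen pair dies
        assert len(U) % 2 == 0              # cardinality stays even
    return acc / t

# Stationary law of m = |U|: detailed balance gives pi(2k) ~ alpha^(2k)/(2k)!,
# i.e. Poisson(alpha) conditioned to be even, with mean alpha * tanh(alpha).
w = [alpha ** (2 * k) / math.factorial(2 * k) for k in range(20)]
exact_mean = sum(2 * k * p for k, p in zip(range(20), w)) / sum(w)
sim_mean = gillespie(alpha)
```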
\subsubsection{Construction of the dynamics}\label{subs:condynhyp} To prove Proposition~\ref{prop:hypergen} we will employ a rather constructive approach.
More precisely, we build a $(c,\mu)$-Markov chain inductively by defining all the interarrival times, we will then show that such a process has generator $\mathscr{L}$.
This approach is rather convenient since it allows us to construct couplings which we use to perform ``convergence to equilibrium'' estimates. Finally, these estimates will be used to solve the Stein equation and show uniqueness of the invariant distribution for $\mathscr{L}$.
We begin by defining noise sources, that is, by introducing the clocks that force points to appear or disappear in $U_t$.
\paragraph{Noise sources}\label{par:noise_sources}
We consider a probability space $({\Omega},\,{\cF},\,{\bbP})$ on which a family $\Xi:={\{ \xi^A \}}_{A \in \mathscr{A} }$ of independent Poisson processes of rate $1$ each is defined, together with a Poisson random measure $\beta $ on $[0,+\infty) \times \mathscr{A}$, which is independent from the family ${\{\xi^A \}}_{A \in \mathscr{A}} $
and whose intensity measure is $ \alpha^2 \lambda \otimes \lambda_{\mathscr{A}}$, where $\lambda$ is the Lebesgue measure on $[0,+\infty)$ and $\lambda_{\scrA}$ is the measure on $\scrA$ given by
\[
\lambda_{\scrA}(\bsA) = \int_{\bsA \cap \mathscr{A}} \De u \De v
\]
for all Borel sets $\bsA\in \mathscr{B}(\mathscr{A})$ (recall that $\mathscr{A}$ is an open subset of ${(0,1)}^2$).
For each $A\in \mathscr{A}$, the process $\xi^A$ will account for the dying clock of the pair $A$, and the process $\beta$ will indicate the rate at which a pair of new jump times is added.
The canonical filtration is defined as usual as
\[
\cF_t \ldef \sigma\Big( {\{ \xi^A_s \}}_{s\leq t , A \in \mathscr{A}} \cup {\{ \beta_s(B)\}}_{s \leq t, B \in \mathscr{B}(\mathscr{A})}\Big)
\]
where we use the notation ${\{ \beta_s(B) \}}_{s\geq 0}$ for $\beta([0,s] \times B)$. We remark that $\beta_s(B)$ is a Poisson process with intensity $\alpha^2\lambda_{\mathscr{A}}(B)$. Similarly, we define for any $t$ the family ${\{ \xi^{A,t} \}}_{A \in \mathscr{A}}$ and the random measure $\beta^t$ via:
\begin{equation}
\xi^{A,t}_s := \xi^A_{t+s} - \xi^{A}_t, \quad \beta^t_s(B):= \beta( (t,t+s] \times B )\label{eq:xi_with_indices} \end{equation}
$\Xi^t$ is then ${\{ \xi^{A,t} \}}_{A \in \mathscr{A}}$. Moreover, for any $\mathscr{A}'\subseteq \mathscr{A}$ and any $t>0$ we define
\[ \Xi^{\mathscr{A}',t}={ \{\xi^{A,t}\}}_{A \in \mathscr{A}'} . \]
The following proposition is a version of the Markov property for $\Xi$ and $\beta$~\cite[Chapter~9, Section~1.1]{Bre13Markov}.
\begin{lemma}\label{lem:Markovppty}
Let $T$ be a stopping time for the filtration ${(\cF_t)}_{t\ge 0}$ and let $\cF_T$ be the associated sigma algebra. Then $(\Xi^T,\beta^T )$ is independent from $\cF_T$ and distributed as $(\Xi,\beta)$.
\end{lemma}
For any finite $F\subseteq \mathscr{A}$ we define
\[
\tau(F)=\tau( \Xi^{F}, \beta ) := \inf \{ t\geq 0:\, \xi^A_t = 1 \, \text{for some $A \in F$ or } \beta_t(\mathscr{A}) =1\}
\]
and, shortening $\tau( \Xi^F, \beta )$ as $\tau$,
\begin{equation*}
\bsA(F)=\bsA(\Xi^{F},\beta) \ldef \begin{cases} A \quad & \mbox{if $\beta_{\tau}(\{ A\}) =1$ for some $A \in \scrA$ } \\ \text{argmax}_{A \in F} \, \xi^A_{\tau} \quad &
\mbox{otherwise} \end{cases}
\end{equation*}
In words, $\tau(F)$ is the first time at which one of the Poisson processes ${\{\xi^A\}}_{A \in F}$ and $\beta(\mathscr A)$ jumps, and $\bsA(F)$ identifies which Poisson process jumped first.
The following is obtained as an application of the competition theorem for Poisson processes (see e.g.~\citet[Chapter 8, Theorem 1.3]{Bre13Markov}).
\begin{prop}\label{thm:competition}
Let $U \in \scrU$. Then, for any $t \geq 0$, $\cO\subset N(U)$ measurable, we have
\[
\bbP\left(\Psi_{\bsA({[U]}^2)} U \in \cO, \tau({[U]}^2) \geq t\right)=\mu_{U}(\cO) \exp(-c(U)t ) .
\]
\end{prop}
\begin{proof}
Observe that it is enough to show the statement for measurable sets $\mathscr{O}$ for which there exist $\cA_1, \cA_2 \in \cB(\mathscr{A})$ such that
\[
\cO = {\{ \Psi_A U \}}_{A \in \cA_1} \cup {\{ \Psi_{A}U \}}_{ A \in \cA_2}, \quad \cA_1 \subseteq {[U]}^2,\,
\cA_2 \subseteq \mathscr{A} \setminus {[U]}^2 .
\]
By linearity, we can restrict attention to the cases where $|\cA_1|=1$ and $\cA_2=\emptyset$, or $\cA_1=\emptyset$.
Let us start by analyzing the first case. Pick $A \in {[U]}^2$ and consequently let $\cO := \Psi_{A}U $. We have, by definition of $\bsA$,
\[
\left \{\Psi_{\bsA({[U]}^2)}U \in \cO\right \} = \left \{ \bsA({[U]}^2) = A \right \} = \left \{ \xi^A_{\tau({[U]}^2)} =1 \right \}.
\]
First, recall that ${\{\xi^A_t\}}_{A \in {[U]}^2}$ and $\beta_t(\scrA)$ are independent Poisson processes with rates $1$ and $\alpha^2/2$ respectively. Therefore,
\begin{align}
\bbP\left(\xi^A_{\tau({[U]}^2)} =1,\,\tau({[U]}^2) \geq t \right) & =
\frac{1}{ |{[U]}^2| +\alpha^2/2 } \exp( -t(|{[U]}^2| +\alpha^2/2 ) )\nonumber \\
& = \frac{1}{c(U)} \exp(-c(U)t).\label{eq:page6}
\end{align}
On the other hand, since $A\in {[U]}^2$, it is easy to verify that $\mu_U(\Psi_A U )= {1}/{c(U)}$ from~\eqref{def:jumpkernel}.
Let us now consider the second case, that is, $\cO = \{ \Psi_{u,v}U \,:\,(u,v) \in \cA_2\}$ with $\cA_2\in \cB(\scrA)$ such that $\cA_2 \cap {[U]}^2 = \emptyset$. We have
\[
\left \{ \Psi_{\bsA({[U]}^2)}U \in \cO \right \} = \left \{ \bsA({[U]}^2) \in \cA_2 \right \} = \left \{ \beta_{\tau({[U]}^2)}(\cA_2)=1 \right \}.
\]
As before, the processes ${\{\xi^A\}}_{A\in{[U]}^2}$, $\beta(\cA_2 )$ and $ \beta( \mathscr{A} \setminus \cA_2 )$
are independent Poisson processes with rates $1$, $\alpha^2\lambda_{\mathscr{A}}(\cA_2)$ and $\alpha^2\lambda_{\mathscr{A}}(\mathscr{A} \setminus \cA_2 )$ respectively. Therefore
\begin{align*} \bbP \left( \beta_{\tau({[U]}^2)}(\cA_2) = 1,\, \tau({[U]}^2) \geq t \right) & = \frac{ \alpha^2\lambda_{\mathscr{A}}(\cA_2) }{c(U)} \exp( -c(U)t ).
\end{align*}
Now let us compute $\mu_U(\cO)$. Since $\cA_2 \cap {[U]}^2=\emptyset$ we have
\begin{equation*}
\mu_{U}(\cO) = \frac{\alpha^2}{c(U)} \int_{\mathscr{A}} \mathbbm{1}_{\cO}(\Psi_{u,v}U )\De u \De v=
\frac{\alpha^2}{c(U)}\int_{\mathscr{A}\cap \cA_2} \De u \De v = \frac{\alpha^2\lambda_{\mathscr{A}}(\cA_2)}{c(U)}
\end{equation*}
which is the desired conclusion.
\end{proof}
In the previous proposition, we set up how the first step of the $(c,\mu)$-chain works. We proceed by defining the successive steps by induction. For a given $U\in \mathscr{U}$, we first set $T^U_0 :=0, Z_{0}:=U$. We then set recursively the jump times
\begin{equation}
\label{def:Uchain} T^{U}_{n+1} -T^{U}_n := \tau\Big(\Xi^{{[Z_n]}^2,\,T^U_n},\beta^{T^U_n}\Big),\quad \bsA^{U}_{n+1}:= \bsA\Big( \Xi^{{[Z_n]}^{2}, \,T^U_n} ,\beta^{T^U_n}\Big)
\end{equation}
and the jump chain
\[
Z_{n+1} := \Psi_{\bsA^{U}_{n+1}}Z_n,\quad n\geq 0.
\]
In words, $T_{n+1}^U$ is the first instant after $T_n^U$ at which one of the clocks $\xi^A$ with $A\in {[Z_n]}^2$ or the clock $\beta(\mathscr A)$ rings, while $\bsA^U_{n+1}$ is the corresponding pair, which is accordingly added or removed.
Finally we define the continuous time process ${(U_t)}_{t\geq 0}$ by
\begin{equation}\label{def2:Uchain}
U_t \ldef Z_{n},\quad\mbox{if}\quad T^U_n\leq t< T^U_{n+1},\, n\geq0.
\end{equation}
For all $U\in\mathscr{U}$, we denote by $\bbP^U$ the law of ${(U_t)}_{t\geq 0}$ on $\bbD(\bbR_+,\mathscr{U})$ and by $\bbE^U$ the corresponding expectation.
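In words: the chain waits an $\mathrm{Exp}(c(U))$ holding time, then removes a uniformly chosen pair of points of $U$ with probability $|{[U]}^2|/c(U)$, and otherwise adds a fresh uniform pair. A purely illustrative Python sketch of one jump (function name ours; $U$ modeled as a finite set of points of $(0,1)$):

```python
import itertools
import random

def chain_step(U, alpha, rng):
    """One jump of the (c, mu)-chain: each of the |[U]^2| pairs of points
    of U dies at rate 1, and a fresh uniform pair is born at total rate
    alpha**2 / 2.  Returns (holding_time, new_configuration)."""
    pairs = list(itertools.combinations(sorted(U), 2))
    c = len(pairs) + alpha**2 / 2          # total jump rate c(U)
    hold = rng.expovariate(c)              # holding time ~ Exp(c(U))
    if rng.random() < len(pairs) / c:      # a pair clock rang first: remove it
        r, s = rng.choice(pairs)
        return hold, U - {r, s}
    u, v = rng.random(), rng.random()      # the birth clock rang: add a uniform pair
    return hold, U | {u, v}
```

Note that the parity of $|U|$ is preserved, since points are always added or removed in pairs.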
\begin{lemma}\label{lem:genident}The process ${(U_t)}_{t\geq 0}$ defined in~\eqref{def2:Uchain} is a $(c,\mu)$-Markov chain with $\mathscr{L}$ as generator and $P^{00}$ as invariant distribution.
\end{lemma}
\begin{proof}
We first show that ${(U_t)}_{t \geq 0}$ is a $(c,\mu)$-Markov chain and then that its generator is $\mathscr{L}$.
In the proof, since there is no ambiguity, we drop the superscript $U$ from $T^U_n$ and $\bsA^U_{n}$, that is, we simply write $T_n$ and $\bsA_n$.
We shall prove that for any $n \in \N$, any bounded and measurable $f:\scrU \rightarrow \R $ and $t \geq 0$, we have almost surely that
\begin{equation}\label{eq:Markovchain1} \bbE\Big[f(U_{T_{n+1}} ) \mathbbm{1}_{\{T_{n+1}-T_n \geq t\}} \Big| \mathcal{F}_{T_n} \Big] = E_{\mu_{U_{T_n}}}[f] \exp\big(-c(U_{T_n}) t \big),
\end{equation}
where $\bbE$ denotes the expectation with respect to $\bbP$.
From~\eqref{eq:Markovchain1}, by choosing $f\equiv 1$ we obtain that, conditionally on $ \cF_{T_n}$, $T_{n+1}-T_n$ is distributed as an exponential random variable of parameter $c(U_{T_n})$.
By setting $t=0$ we obtain that, conditionally on $\mathcal{F}_{T_n} $, $U_{T_{n+1}}$ is chosen according to $\mu_{U_{T_n}}$. By letting $f$ and $t$ vary, we also get that conditionally on $\mathcal{F}_{T_n}$, $T_{n+1}-T_n $ is independent from $U_{T_{n+1}}$.
Now let us observe that, by construction, $\bbP [U_{T_{n+1}} \in N(U_{T_n})]=1$. Moreover, we also have that $\mu_{U_{T_n}}$ is supported on $ N(U_{T_n})$.
As a consequence, we can reduce ourselves to proving~\eqref{eq:Markovchain1} when $f$ is supported on $N(U_{T_n})$. In particular, it suffices to check the formula for $f = \mathbbm{1}_{\cO}$ for some measurable $\cO \subseteq N(U_{T_n})$.
Using~\eqref{def:Uchain} we can rewrite
\begin{align*} \bbE & \Big[f(U_{T_{n+1}} ) \mathbbm{1}_{\{T_{n+1}-T_n \geq t\}} \Big| \mathcal{F}_{T_n} \Big] \\
& = \bbP\Big[ \Psi_{\bsA(\Xi^{{[U_{T_n}]}^2,T_n},\beta^{T_n})}U_{T_n} \in \cO, \tau(\Xi^{{[U_{T_n}]}^2,T_n},\beta^{T_n}) \geq t \vert \mathcal{F}_{T_n} \Big].\end{align*}
Thanks to the Markov property of Lemma~\ref{lem:Markovppty}, $(\Xi^{T_n},\beta^{T_n})$ is distributed as $(\Xi,\beta)$ and independent from $\cF_{T_n}$.
Therefore, we can apply Proposition~\ref{thm:competition} to conclude that
\begin{align*}
\bbP\Big[ \Psi_{\bsA(\Xi^{{[U_{T_n}]}^2,T_n},\beta^{T_n})}U_{T_n} \in \cO, \tau(\Xi^{{[U_{T_n}]}^2,T_n},\beta^{T_n}) \geq t \vert \mathcal{F}_{T_n} \Big] \\
= \mu_{U_{T_n} }( \cO) \exp(-c(U_{T_n})t )
\end{align*}
which is what we wanted to prove.
We now show that ${(U_t)}_{t \geq 0}$ admits $\mathscr{L}$ as generator. Let $f:\mathscr{U}\to \bbR$ be bounded and measurable.
Using that $T_1\sim \mathrm{Exp}(c(U))$ and that, conditionally on $\cF_{T_1}$, $T_2-T_1\sim \mathrm{Exp}(c(U_{T_1}))$, it is not hard to show that the probability of two or more jumps before time $t$ is $\bbP[T_2\leq t]=O\left(t^2\right)$. Thus,
\[
\left|\bbE\left[(f(U_{t}) - f(U))\mathbbm{1}_{\{ T_2\leq t\}}\right]\right|\le 2\|f\|_{\infty}O\left(t^2\right).
\]
Therefore,
\begin{align}
\lim_{t\downarrow 0} & \frac{\bbE[f(U_t) - f(U)]}{t} \nonumber \\
& =\lim_{t\downarrow 0}\frac{\bbE \left[(f(U_{T_1}) - f(U))\mathbbm{1}_{\{T_1\le t<T_2\}}\right]}{t}
\stackrel{\eqref{eq:Markovchain1}}{=}\lim_{t\downarrow 0}\frac{ (E_{\mu_U}(f)-f(U)) \left(1-\e^{-c(U)t}\right)}{t} \nonumber \\
& = c(U)\big(E_{\mu_U}(f)-f(U)\big) \nonumber \\
& =\alpha^2\int_{\mathscr{A}} \left[f(\Psi_{r,s} U)-f(U)\right] \De r \De s +\sum_{A \in {[U]}^2} (f(\Psi_A U )-f(U)) = \mathscr{L}f(U)\label{eq:compu_with_generator}
\end{align}
and we conclude.
\end{proof}
\subsubsection{Construction and analysis of the coupling}\label{subsec:coupling_hyper}
In this paragraph we construct a coupling between two Markov chains associated with $\mathscr{L}$ which start from neighboring points of $\mathscr{U}$.
We begin by fixing a pair $U,\,V\in \mathscr{U}$ such that $ \Psi_{r,s}V=U$ with $r<s$ and $r,\,s\notin V$ (see Figure~\ref{fig:coup1}). Next, we define the Markov chain ${(U_t)}_{t \geq 0 }$ with $U_0 = U$ as in the previous subsection.
\begin{figure}[ht!]
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.42,-1.02) rectangle (5.94,5.32);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U$};
\draw (5.54,0.62) node[anchor=north west] {$V$};
\draw [line width=1.2pt] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt] (4.,2.)-- (4.,0.);
\draw (-2.48,4.62) node[anchor=north west] {$r$};
\draw (-0.42,4.62) node[anchor=north west] {$s$};
\end{tikzpicture}
\caption{$U=\Psi_{r,\,s}V$.}\label{fig:coup1}
\end{figure}
To construct the chain ${(V_t)}_{t \geq 0} $ started at $V\in \mathscr{U}$
we use the same noise sources that determined ${(U_t)}_{t\geq0}$.
More precisely, pairs that are added or removed from $V_t$ are exactly those added or removed from $U_t$ up to the time $T_m^U$ that a pair containing either $r$ or $s$ is removed from $U_t$. At time $T_m^U$ we have two possibilities:
\begin{itemize}
\item[1)] the pair $(r,s)$ is removed from $U_{t}$. As $(r,s)$ does not belong to $V_t$, the two processes now coincide and will continue moving together.
\item[2)] either $(r,u)$ or $(u,s)$, $u\notin \{ r,s\}$, is removed from $U_{t}$, say for the sake of example $(r,u)$ is removed. Nothing happens to $V_t$ at time $T_m^U$.
At later times, pairs that are added or removed from $V_t$ are exactly those added or removed from $U_t$ up to the time $T_M^U \geq T_m^U$ when a pair containing $s$, say $(v,s)$ for some $v$, is removed from $U_t$.
At this time the pair $(u,v)$ is removed from $V_t$. Now the two processes coincide and will move henceforth together.
\end{itemize}
Let us now describe the above construction more rigorously.
Clearly, there is a bijection between $\mathscr{A}$ and the set of unordered pairs $\{ \{ u,v\}:\,u\neq v,\,u,\,v\in(0,1)\}$. With a little abuse of notation, we will at times regard $A\in \mathscr{A}$ as a subset of $(0,1)$ with two elements.
First recall the notation in~\eqref{def:Uchain}. We define the random variable
\[
m := \min \{ k : \{ r,s\} \nsubseteq U_{T^U_k}\}.
\]
In this way,
\begin{equation}\label{def:TUm}
T^U_{m}: = \inf \{ t \geq 0 : \{ r,s\} \nsubseteq U_t \}
\end{equation}
is the first time a clock associated to a pair present in $U$ but not in $V$ rings. Further, define
\[
\zeta := \{r,s \} \setminus \bsA^U_{m} , \quad \eta := \bsA^U_{m} \setminus \{r,s\}.
\]
The set $\zeta$ contains the point among $r$ and $s$ (if any) which is not removed at time $T^U_m$, and $\eta$ the point that was removed together with $\{ r,\,s\}\setminus\zeta$ (see Figure~\ref{fig:syncrocoupl}).
Note that the sets $\zeta$ and $\eta$ have at most one element; when they are non-empty we shall at times regard them as elements of $(0,1)$. We define
\[
M:= \min \{ k \geq m : \zeta \cap U_{T^U_k} = \emptyset \}.
\]
In this way,
\begin{equation}\label{def:TUM}
T^U_M = \inf \{t \geq T^U_{m } : \zeta \cap U_t = \emptyset \},
\end{equation}
is the first time after $T_m^U$ when a clock involving $\zeta$ rings.
\begin{figure}[!htb]
\centering\subfloat[Clock of $(r,\,u)$ rings, $u \neq s$. Here $\zeta=s$, $\eta=u$.]{
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-0.58) rectangle (6.64,5.92);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U_{T^U_m}$};
\draw (5.54,0.62) node[anchor=north west] {$V_{T^U_m}$};
\draw [line width=1.2pt] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt] (4.,2.)-- (4.,0.);
\draw (-2.48,4.62) node[anchor=north west] {r};
\draw (-0.42,4.62) node[anchor=north west] {$s=\zeta$};
\draw (1.26,4.62) node[anchor=north west] {$u=\eta$ };
\draw (4.26,4.62) node[anchor=north west] {};
\draw (-4.2,5.5) node[anchor=north west] {\showclock{3}{00}};
\draw [shift={(-2.26,4.98)}] plot[domain=2.9812172096138427:4.769469762790954,variable=\t]({1.*1.377679207943562*cos(\t r)+0.*1.377679207943562*sin(\t r)},{0.*1.377679207943562*cos(\t r)+1.*1.377679207943562*sin(\t r)});
\draw [->] (-2.2796791235706597,3.6024613500538396) -- (-2.1,3.64);
\draw [->] (-3.62,5.2) to [out=70] (0.8,4);
\end{tikzpicture}
}
\quad
\subfloat[The clocks of $(\zeta,\,v)$ in $U_t$ and $(\eta,\,v)$ in $V_t$ are synchronised after $(r,\,\eta)$ dies at $U_{T^U_m}$.]{
\definecolor{qqqqff}{rgb}{0.,0.,1.}
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-0.58) rectangle (6.64,5.92);
\draw [line width=1.2pt,dashed] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt,dashed] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U_{T^U_M}$};
\draw (5.54,0.62) node[anchor=north west] {$V_{T^U_M}$};
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt] (4.,2.)-- (4.,0.);
\draw (-0.42,4.36) node[anchor=north west] {$s=\zeta$};
\draw (4.26,4.36) node[anchor=north west] {$v$};
\draw (1.26,1.38) node[anchor=north west] {$u=\eta$};
\draw [->] (-0.58,2.28) -- (-0.62,2.92);
\draw [->] (-0.58,2.28) to [out=70] (4.,3.1);
\draw [->] (0.74,2.03) to [out=90] (1.0,0.66);
\draw [->] (0.73,2.12) to [out=70] (3.92,1.26);
\begin{scriptsize}
\draw (-0.35,2.10) node {\showclock{4}{50} \quad=};
\draw (0.50,2.05) node {\showclock{4}{50}};
\end{scriptsize}
\end{tikzpicture}
}
\caption{An illustration of the coupling dynamics.}\label{fig:syncrocoupl}
\end{figure}
Both $T^U_m$ and $T^U_M$ are $\cF_t$-stopping times, and we have $T^U_m = T^U_M$ if and only if $\zeta = \emptyset$. The event $\{\zeta=\emptyset \}$ simply means that the clock associated to the pair $(r,\,s)$ has rung.
Observe that to implement 2), the process $V_t$ must use the clocks $\xi^{(\zeta,u)}$ with $u\in(0,1)$ in place of the clocks $\xi^{(\eta,u)}$ after $T_m^U$.
We shall define a random bijection $\sigma: \Omega \rightarrow \mathscr{A}^{\mathscr{A}}$ that implements this idea of ``switching the clocks''. This allows us to write a formula (see~\eqref{eq:gammaclock} below) for the noises that determine $V_t$ in terms of the noises $(\Xi,\beta)$ that determine $U_t$.
We set $\sigma := \mathbf{id}_{\mathscr{A}}$ on the event $ \{ \zeta =\eta= \emptyset \}$. Otherwise, $\zeta$ and $\eta$ are singletons and we set
\begin{equation}\label{def:sigmahcube} \sigma(A) = \begin{cases} A & \quad \mbox{if $\zeta, \eta \notin A$ or $\zeta \cup \eta =A $ } \\ (A \setminus \zeta) \cup \eta & \quad \mbox{if $\zeta \in A, \eta \notin A $}
\\ (A \setminus \eta) \cup \zeta & \quad \mbox{if $\eta \in A, \zeta \notin A$ }
\end{cases},
\end{equation}
where we used the aforementioned convention of understanding $A\in\mathscr{A}$ as a subset of $(0,1)$ with two elements, and $\zeta$ and $\eta$ as elements of $(0,1)$.
Note that $\sigma$ is $\mathcal{F}_{T^U_m }$-measurable, where we specify that on $\mathscr{A}^{\mathscr{A}}$ we put the standard cylinder $\sigma$-algebra.
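The case analysis in~\eqref{def:sigmahcube} is straightforward to implement; the following purely illustrative sketch (function name ours, pairs modeled as two-element frozensets, empty $\zeta$ or $\eta$ modeled as \texttt{None}) also confirms the involution property $\sigma(\sigma(A))=A$ used later on:

```python
def make_sigma(zeta, eta):
    """Clock-switching bijection of the displayed definition: swaps the
    roles of the points zeta and eta inside a pair A; identity when zeta
    or eta is empty (modeled as None)."""
    def sigma(A):
        if zeta is None or eta is None:
            return A
        if set(A) == {zeta, eta} or (zeta not in A and eta not in A):
            return A
        if zeta in A:                        # zeta in A, eta not in A
            return (A - {zeta}) | {eta}
        return (A - {eta}) | {zeta}          # eta in A, zeta not in A
    return sigma
```

A quick check: with $\zeta=0.3$, $\eta=0.8$, the pair $\{0.3,0.5\}$ is mapped to $\{0.8,0.5\}$ and back, while $\{0.1,0.2\}$ and $\{0.3,0.8\}$ are fixed.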
We define the family $\Gamma={\{ \gamma^A \}}_{A \in \mathscr{A}}$ by
\begin{equation}\label{eq:gammaclock} \gamma^{A}_t = \xi^A_t \mathbbm{1}_{\{t < T^U_m \}} +\left( \xi^{A}_{T^U_m}+ \xi^{\sigma(A), T^U_m}_{t-T^U_m}\right)
\mathbbm{1}_{\{ T^U_{m } \leq t < T^U_M \}} +\left( \xi^{A}_{T^U_m}+ \xi^{\sigma(A), T^U_m}_{T^U_M-T^U_m} + \xi^{\sigma(A), T^U_M }_{t-T^U_M} \right) \mathbbm{1}_{\{ t \geq T^U_M \}}. \end{equation}
Finally, we define the process ${(V_t)}_{t \geq 0 }$ in the same way as in~\eqref{def:Uchain} and~\eqref{def2:Uchain} by replacing $U$ with $V$ and $\Xi$ with $\Gamma$. Namely, fixing $T_0^V =0$ and $W_0 = V$, we set recursively the jump times
\begin{equation}\label{def:Vchain} T^V_{n+1} -T^V_n \ldef \tau(\Gamma^{{[W_n]}^2,T_n^V},\beta^{T_n^V}),\quad \bsA^V_{n+1}\ldef \bsA( \Gamma^{{[W_n]}^{2}, T_n^V} ,\beta^{T_n^V})
\end{equation}
and the jump chain
\[
W_{n+1} \ldef \Psi_{\bsA^V_{n+1}}W_n,\quad n\geq 0.
\]
As before, we define the continuous time process ${(V_t)}_{t\geq 0}$ by
\begin{equation}\label{def2:Vchain}
V_t \ldef W_{n},\quad\mbox{if}\quad T^V_n\leq t< T^V_{n+1},\, n\geq0.
\end{equation}
We have not shown yet that ${(V_t)}_{t\geq 0}$ is a $(c,\mu)$-Markov chain started in $V$. The next lemma will be fundamental to show that the pair ${(U_t,\,V_t)}_{t \geq 0}$ is indeed a coupling. It asserts that if we construct another family of processes
by exchanging the increments of $\xi^A$ with the increments of $\xi^{\sigma(A)}$ after a certain time $T$, the resulting family has the same distribution as $\Xi$, provided that $\sigma(\omega): \scrA \rightarrow \scrA$ is a bijection.
The proof, being rather technical but standard, is postponed to Appendix~\ref{app:clockswitch}.
\begin{lemma}\label{lem:clockswitch}
Let $T$ be an $\cF_t$-stopping time and let $\sigma:\Omega\rightarrow \mathscr{A}^{\mathscr{A}}$ be a random bijection which is $\cF_T$-measurable.
Define the family ${\{\rho^A \}}_{A \in \mathscr{A}}$ by (recall~\eqref{eq:xi_with_indices})
\[ \rho^A_t:= \xi^A_t \mathbbm{1}_{\{t< T\}} + ( \xi^A_T + \xi^{\sigma(A),T}_{t-T} ) \mathbbm{1}_{\{t \geq T\}}. \]
Then ${\{ \rho^A \}}_{A \in \mathscr{A}}$ is distributed as $\Xi$.
\end{lemma}
\begin{prop}\label{lem:hcubecoupling}
The pair ${(U_t,\,V_t)}_{t \geq 0}$ is a coupling of $\bbP^U$ and $\bbP^V$.
\end{prop}
\begin{proof} By definition $\bbP^U$ is the law of ${(U_t)}_{t\geq0}$ on $\bbD(\bbR_+,\,\mathscr{U})$.
Therefore, the only thing to show is that $\bbP^V$ is the law of ${(V_t)}_{t\geq0}$ on $\bbD(\bbR_+,\,\mathscr{U})$.
For that it is enough to prove that $\Gamma = \Xi$ in distribution, since ${(V_t)}_{t \geq 0}$ is constructed as ${(U_t)}_{t \geq 0}$ by simply replacing the driving noise $\Xi$ with $\Gamma$ and $U$ with $V$.
An application of Lemma~\ref{lem:clockswitch} with $T = T^U_{m}$ and $\sigma$ as in~\eqref{def:sigmahcube} yields that $ \Theta\ldef {\{ \theta^A \}}_{A \in \mathscr{A}} = \Xi$
in distribution, where
\[
\theta^A_t \ldef \xi^A_t \mathbbm{1}_{\{t<T^U_{ m}\}} + \left(\xi^{A}_{T^U_m} + \xi^{\sigma(A),T^U_m}_{t-T_m^U} \right) \mathbbm{1}_{\{t \geq T^U_m \}}.
\]
Applying again Lemma~\ref{lem:clockswitch} for $T := T^U_M$ and $\sigma$ as in~\eqref{def:sigmahcube} we obtain that the process $ \overline{\Theta} \ldef {\{ \overline{\theta}^A \}}_{A \in \mathscr{A}} = \Theta$ in distribution, where
\[
\overline{\theta}^A_t \ldef \theta^A_t \mathbbm{1}_{\{t<T^U_{ M}\}} + \left(\theta^{A}_{T^U_M} + \theta^{\sigma(A),T^U_M}_{t-T_M^U}\right) \mathbbm{1}_{\{t \geq T^U_M \}}
\]
(this is~\eqref{eq:gammaclock}). Using the fact that $\sigma (\sigma(A)) =A$ for all $A \in \mathscr{A}$, it is possible to see that $\overline{\Theta} = \Gamma$, from which the conclusion follows.
\end{proof}
Let us collect below some properties of the coupling $(U_t,\,V_t)$ defined above which follow readily from the construction. From now on, since there is no ambiguity, we write $T_m$ and $T_M$ instead of $T^U_m$, $T^U_M$.
\begin{enumerate}[leftmargin=*,label= ({\roman*})]
\item\label{item:point1}
For $t<T_{m}$ we have $U_t = \Psi_{r,s}V_t = V_t \cup \{r,s \}$ and $\{r,s\} \cap V_t = \emptyset$.
\item\label{item:point2}
For any $T_m \leq t < T_M $ we have
$U_t = (V_t \setminus \eta) \cup \zeta$.
\item\label{item:point3}
For any $t \geq T_M$ we have $U_t = V_t$.
\item\label{item:point4}
In particular, for all $t\geq 0$
\begin{equation}\label{eq:bound_pre_mean_dist}
d(U_t,V_t) = \mathbbm{1}_{\{t <T_m\}} + 2 \mathbbm{1}_{\{T_m \leq t < T_M\}}.
\end{equation}
\end{enumerate}
We finally come to the proof of Proposition~\ref{prop:lip}, of formula~\eqref{eq:sol}, and of the fact that $P^{00}$ is the unique invariant distribution of the Markov chain ${(U_t)}_{t\geq 0}$. We start by proving Proposition~\ref{prop:lip}.
\begin{proof}[Proof of Proposition~\ref{prop:lip}]
The first step is to prove that for all $t\ge 0$
\begin{equation}\label{eq:bound_mean_dist}
\bbE[d(U_t,V_t)] \leq 4\exp(-t/2)+\exp(-t),
\end{equation}
for which we shall use~\eqref{eq:bound_pre_mean_dist}. We bound $\bbP[t <T_m]$ by $\bbP[\xi^{(r,\,s)}_t=0]\leq \exp(-t)$,
since $\xi^{(r,\,s)}$ is a Poisson process with rate $1$.
The second summand of~\eqref{eq:bound_pre_mean_dist} will give a contribution of $4 \exp(-t/2)$, using that
\[
\{T_m\le t<T_M\}\subseteq \left \{T_m\ge\frac{t}{2} \right \}\cup\left \{T_M-T_m\ge\frac{t}{2} \right \}.
\]
Note that the second event implies that the coupling has not succeeded within time $t/2$ after $T_m$, meaning that none of the clocks $\{\xi^A:\, A\in {[U_{t'}]}^2,\,\zeta\in A,\,t'\in[T_m,\,T_m+t/2)\}$ has rung; since at each such time at least one clock involving $\zeta$ runs at rate $1$, this event has probability at most $\exp(-t/2)$.
The proof of~\eqref{eq:bound_mean_dist} is now complete.
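Summarizing, the two bounds combine with~\eqref{eq:bound_pre_mean_dist} as
\begin{align*}
\bbE[d(U_t,V_t)] &= \bbP[t<T_m]+2\,\bbP[T_m\le t<T_M]\\
&\le \exp(-t)+2\left(\bbP\Big[T_m\ge \frac{t}{2}\Big]+\bbP\Big[T_M-T_m\ge \frac{t}{2}\Big]\right)\le \exp(-t)+4\exp(-t/2),
\end{align*}
where both probabilities in the last line are bounded by $\exp(-t/2)$, since in each case a rate-$1$ exponential clock must stay silent for a time $t/2$.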
Let us now show~\eqref{eq:bound}. Let $W_1,\,W_2\in \mathscr{U}$ and assume first $d(W_1,W_2) = 1$. If we denote by ${(W_{i,t})}_{t\geq 0}$ the process with law $\bbP^{W_i}$, $i=1,2$, then by~\eqref{eq:bound_mean_dist} and the Lipschitz continuity
\begin{align*}
\left|S_t f(W_1)-S_t f(W_2)\right| &\leq \left|\bbE[f(W_{1,t})- f(W_{2,t})]\right| \\
&\leq \bbE[d(W_{1,t},W_{2,t})] \leq 4\exp(-t/2)+\exp(-t).
\end{align*}
In the case when $d(W_1,W_2)>1$, it suffices to consider a path of length $d(W_1,W_2)$ from $W_1$ to $W_2$ and use the triangular inequality.
\end{proof}
We now show that the Stein equation $\mathscr{L}g=f$ admits, for all $f\in\mathrm{Lip}_1(\mathscr{U})$ with $E_{P^{00}}[f] = 0$, a solution given by formula~\eqref{eq:sol}. This follows from the convergence-to-equilibrium estimates contained in the next proposition.
For any probability measure $\nu\in \cP(\mathscr{U})$, recall that we denote by $\nu_\# S_t$ the measure determined by $\nu_\# S_t(A) \ldef E_\nu[S_t \mathbbm{1}_A]$.
\begin{prop}\label{prop:SteinSol} Let $\mu,\,\nu\in \mathscr{P}(\mathscr{U})$ be probability measures. Then
\begin{equation}\label{eq:conveqhyp}
d_{W,1}(\nu_\# S_t,\mu_\# S_t) \leq (4\exp(-t/2)+\exp(-t)) d_{W,1}(\mu,\nu).
\end{equation}
In particular, $P^{00}$ is the only invariant distribution of $S_t$. Furthermore, for any $f\in \mathrm{Lip}_1(\mathscr{U})$ such that $E_{P^{00}}[f]=0$ the function
\[
g(U)\ldef -\int_0^\infty S_t f(U)\De t,\quad U\in \mathscr{U},
\]
solves $\mathscr{L}g(U)=f(U)$ for all $U\in \mathscr{U}$. Moreover $g$ is a $9$-Lipschitz function.
\end{prop}
\begin{proof} By definition of $1$-Wasserstein distance
\begin{align*}
d_{W,1}(\nu_\# & S_t,\mu_\# S_t) = \sup_{f\in\mathrm{Lip}_1(\mathscr{U})}\left|\int S_t f \,\De\nu-\int S_t f \,\De\mu\right| \\
&\leq (4\exp(-t/2)+\exp(-t)) \sup_{g\in\mathrm{Lip}_1(\mathscr{U})}\left|\int g \,\De\nu-\int g \,\De\mu\right| \\
& = (4\exp(-t/2)+\exp(-t)) d_{W,1}(\nu,\mu),
\end{align*}
where we used that $S_t f$ is Lipschitz with constant $ 4\exp(-t/2)+\exp(-t)$. The uniqueness of the invariant distribution is obvious from~\eqref{eq:conveqhyp}.
The fact that $g$ solves $\mathscr{L}g=f$ for $f\in\mathrm{Lip}_1(\mathscr{U})$, $E_{P^{00}}[f]=0$ is a simple consequence of four steps: passing to the limit in the equality
\[
f(U) - S_u f(U) = -\int_0^u \mathscr{L}S_t f(U)\,\De t,
\]
using Fubini's Theorem, the particular form of $\mathscr{L}$ and
\[
|S_u f(U)|=|S_u f(U)- E_{P^{00}}[f]| \leq (4\exp(-u/2)+\exp(-u)) d_{W,1}(\delta_U,P^{00}),
\]
which holds thanks to~\eqref{eq:conveqhyp}. The Lipschitz constant of $g$ is obtained also from Proposition~\ref{prop:lip}.
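Indeed, by Proposition~\ref{prop:lip},
\[
|g(U)-g(V)| \le \int_0^\infty \left|S_t f(U)-S_t f(V)\right| \De t \le d(U,V)\int_0^\infty \left(4\exp(-t/2)+\exp(-t)\right)\De t = 9\, d(U,V).
\]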
\end{proof}
\subsection{The continuous time random walk on \texorpdfstring{$\Z$}{}}\label{subsec:CTRW}
\subsubsection{Setting and notation}\label{subsubsec:notCTRW}
In this Section we discuss how the ideas for the random walk on the hypercube can be transported to the case of a continuous time random walk on $\bbZ$ (and more generally on $\bbZ^d$, see Corollary~\ref{cor:boundWasCTRW}).
We state the results without detailed proofs, as everything can be done by repeating almost word by word the arguments of the previous section.
We assume that the walker starts at $0$, jumps up by one at rate $j_+$ and down by one at rate $j_-$. We denote by $P^0$ the law on $\bbD([0,1];\bbZ)$ of such a walk up to time $T:=1$. The bridge of the random walk from and to the origin is given by
\[
P^{00}(\cdot)\ldef P^0(\cdot|X_0=0,\,X_1=0)
\]
and is supported on the space of piecewise constant c\`adl\`ag paths with initial and terminal point at the origin and jumps of size $\pm 1$, which we denote by $\Pi([0,1]; \bbZ)$. Let
\begin{equation*}
\mathscr{V}\ldef \Big \{U= (U^+, U^-) :\, |U^+|,\,|U^-|<\infty \text{ and }\, U^+\times U^- \subset {(0,1)}^2\setminus\Delta\Big \}
\end{equation*}
where $\Delta \ldef \{ (u,u),\;u\in(0,1)\}$.
We consider the map $\bbU: \bbD([0,1]; \bbZ)\to \mathscr{V}$ that to each path $X\in\bbD([0,1]; \bbZ)$ associates $(U^+,\, U^-)$ where $U^+ \subset (0,1)$ is the set of times of positive jumps of $X$ and
$U^-\subset (0,1)$ is the set of times of negative jumps of $X$.
As for the case of the hypercube, it will be convenient to characterize $P^{00}$ as a measure on the set of jump times. We observe that $\Pi([0,1]; \bbZ)$ is in bijection with
\begin{equation*}
\mathscr{U}\ldef \Big \{U= (U^+, U^-)\in \mathscr{V} :\, |U^+|=|U^-|\Big \}.
\end{equation*}
The bijection is given by the restriction of $\bbU$ to $\Pi([0,1]; \bbZ)$; we denote by $\bbX : \mathscr{U}\to \Pi([0,1]; \bbZ)$ the inverse. We endow $\mathscr{U}$ with the $\sigma$-algebra $\cU$ of sets $A$ such that $\bbU^{-1}(A)$ belongs to the Borel $\sigma$-algebra of $\bbD([0,1]; \bbZ)$.
The perturbations that we choose to characterize $P^{00}$ are those preserving the ``parity'' of the path, meaning that they add or remove simultaneously a positive and negative jump. More precisely we redefine $\mathscr A:={(0,1)}^2\setminus\Delta $ (and from now on, this notation will be assumed throughout the rest of the Section) and for $(r,s)\in \mathscr A$
we define $\Psi_{r,s} : \mathscr{U}\to \mathscr{U}$ by
\[
\Psi_{r,s}U = \Psi_{r,s}(U^+,U^-)\ldef
\begin{cases}
(U^+\cup \{r\},\,U^-\cup \{s\}) & \mbox{ if } r\notin U^+,\,s\notin U^- ,\\
(U^+\setminus \{r\},\,U^-\setminus \{s\}) & \mbox{ if } r\in U^+,\,s\in U^- , \\
U & \mbox{ otherwise. }
\end{cases}
\]
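The map $\Psi_{r,s}$ simply toggles a positive-negative pair of jump times. A purely illustrative sketch (function name ours, $U$ modeled as a pair of frozensets):

```python
def psi(U, r, s):
    """Psi_{r,s} acting on U = (U_plus, U_minus): add the pair (r, s) of a
    positive and a negative jump time if both are absent, remove it if both
    are present, and leave U unchanged otherwise."""
    U_plus, U_minus = U
    if r not in U_plus and s not in U_minus:
        return (U_plus | {r}, U_minus | {s})
    if r in U_plus and s in U_minus:
        return (U_plus - {r}, U_minus - {s})
    return U
```

Observe that $\Psi_{r,s}$ preserves $|U^+|=|U^-|$, so it maps $\mathscr{U}$ to itself, and applying it twice returns to the starting configuration.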
We endow $\mathscr{U}$ with the graph structure induced by the maps $\{\Psi_{r,s}:\,(r,s)\in \mathscr A\}$. That is, we say that $U,\,V\in \mathscr{U}$ are neighbors if there is $(r,s)\in{(0,1)}^2\setminus\Delta $ such that $U=\Psi_{r,s}V$,
see Figure~\ref{fig:coup1RWZ} for an example of two nearest-neighbor paths. We put on $\mathscr{U}$ the graph distance $d:\mathscr{U}\times \mathscr{U}\to\bbN$. Observe that $\mathscr{U}$ is connected, since any point $U\in \mathscr{U}$ has distance $|U^+|$ from $\mathbf{0}\ldef (\emptyset,\emptyset)$.
\begin{figure}[ht!]
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.42,-1.02) rectangle (5.94,5.32);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U$};
\draw (5.54,0.62) node[anchor=north west] {$V$};
\draw [line width=1.2pt,color=ffqqtt] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt,color=ffqqtt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt,color=ffqqtt] (4.,2.)-- (4.,0.);
\draw (-2.48,4.62) node[anchor=north west] {$r$};
\draw (-0.42,4.62) node[anchor=north west] {$s$};
\end{tikzpicture}
\caption{ $U=\Psi_{r,\,s}V$. The red segment is a $+1$ jump and the black a $-1$ jump.}\label{fig:coup1RWZ}
\end{figure}
\begin{prop}\label{prop:genRWonZ}
$P^{00}$ is the only invariant measure of a Markov process ${\{ U_t \}}_{t\geq 0}$ on $\mathscr{U}$ with generator
\begin{equation}\label{eq:genRWonZ}
\mathscr{L} f(U)\ldef j_+ j_- \int_\mathscr{A} (f(\Psi_{r,s} U)- f(U)) \De r\De s + \sum_{(r,s)\in U^+\times U^-} (f(\Psi_{r,s} U)- f(U))
\end{equation}
for all $f : \mathscr{U}\to \bbR$ bounded and measurable.
\end{prop}
\begin{proof} As in the hypercube example (proof of Prop.~\ref{prop:hypergen}) we want to show $E_{P^{00}}[\mathscr L f]=0$ for all functions $f$ bounded and measurable.
Again we rely on~\citet[Example 28]{CR}, who give the following integration-by-parts characterization of the bridge measure $P^{00}$ on $\bbD([0,1];\bbZ)$: for all bounded and measurable $F:\bbD([0,1];\bbZ)\times(0,1)\times(0,1)\to\R$
\begin{align}
{j_+j_-}E_{P^{00}}&\left[\sum_{(t_1,\,t_2)\in \bbU{(X)}^+\times\bbU{(X)}^- }F(X,\,t_1,\,t_2)\right]\nonumber \\
&=\int_{{[0,\,1]}^2}E_{P^{00}}\left[F\left(X+\mathbbm{1}_{[t_1,\,1]}-\mathbbm{1}_{[t_2,\,1]},\,t_1,\,t_2\right)\right]\De t_1\De t_2 \label{eq:RWIBP}
\end{align}
where $(t_1,\,t_2) \in \bbU{(X)}^+\times\bbU{(X)}^-$ means that $X_{t_1^-}+1=X_{t_1},\,X_{t_2^-}-1=X_{t_2}$ (the reader can compare the notation with the proof of Prop.~\ref{prop:hypergen}).
Again we can consider functionals $F(X,\,t_1,\,t_2)$ of the form $G(\bbU(X),\,t_1,\,t_2)$ with $G:\mathscr{U}\times(0,1)\times(0,1)\to \bbR$, and note that
$\bbU(X+\mathbbm{1}_{[r,\,1]}-\mathbbm{1}_{[s,\,1]})=\Psi_{r,\,s}U$ almost everywhere in $r$ and $s$.
Taking $G$ to be a difference, we can conclude in the same way as in Prop.~\ref{prop:hypergen} that $P^{00}$ is indeed invariant for $\mathscr{L}$ (the proof of uniqueness will follow as a consequence of Prop.~\ref{prop:lipCRWZ}, as we shall see below).
\end{proof}
Below we will rapidly discuss how to construct for any $U\in\mathscr{U}$ a continuous time Markov chain ${\{ U_t \}}_{t\geq 0}$ on $\mathscr{U}$ with generator $\mathscr{L}$ and started from $U$.
We will denote by $\bbP^U$ the law of such process on $\bbD(\bbR_+;\mathscr{U})$, by $\bbE^U$ the corresponding expectation and by
\[
S_t f(U)\ldef \bbE^U[f(U_t)],\quad f:\mathscr{U}\to \bbR
\]
its semigroup. The construction of $U_t$ via Poisson processes will be quite convenient in showing the following key proposition.
\begin{prop}\label{prop:lipCRWZ}
For any $f\in \mathrm{Lip}_1(\mathscr{U})$ with $E_{P^{00}}[f]=0$, any $U,V\in\mathscr{U}$ and all $t\geq 0$
\begin{equation}\label{eq:lipCTRWZ}
|S_t f(U) - S_t f(V)| \leq (4 \exp(-t/2)+\exp(-t)) d(U,V).
\end{equation}
\end{prop}
Proposition~\ref{prop:lipCRWZ} can be proved via a coupling argument. As in the preceding section, with a few changes, we construct two processes ${(U_t)}_{t\geq 0}$ and ${(V_t)}_{t\geq 0}$
with generator $\mathscr{L}$ and starting from neighbouring points $U, V\in\mathscr{U}$
in such a way that they are at most at distance two and coalesce within an exponentially distributed time.
We provide a few details in the next paragraph.
The consequences of Proposition~\ref{prop:lipCRWZ} are the same as those of the preceding section.
In fact, using~\eqref{eq:lipCTRWZ} and the same argument as in Proposition~\ref{prop:SteinSol}, we can prove that for any $f\in \mathrm{Lip}_1(\mathscr{U})$ such that $E_{P^{00}}[f]=0$
\begin{equation}\label{eq:sol_CTRWZ}
g(U)\ldef - \int_0^\infty S_t f(U)\De t,
\end{equation}
is well-defined and solves $\mathscr{L}g =f$. This allows us to obtain the following bound in the Wasserstein distance on $(\mathscr{U},d)$ for bridges of random walks on $\bbZ$ with spatially homogeneous jump rates.
\begin{prop}\label{prop:boundWasCTRW}
Let $P^{00}$, $Q^{00}$ be the laws of two continuous-time random walk bridges on $[0,\,1]$ with jump rates $j_+,\,j_-$ and $h_+,\,h_-$ respectively. Then
\[
d_{W,\,1}(P^{00},\,Q^{00})\le 9\left|j_+j_- -h_+h_-\right|.
\]
\end{prop}
\begin{proof}
Given Proposition~\ref{prop:lipCRWZ}, the proof is analogous to the hypercube case. The factor-two difference in the constant comes from the fact that here we integrate over ${(0,1)}^2\setminus\Delta$ rather than over $\{(u,v)\in{(0,1)}^2:\,u<v\}$;
put differently, in the hypercube case jumping up or down is the same thing.
\end{proof}
The same argument as in Corollary~\ref{cor:higher_dim} leads to a bound for the distance between bridges of random walks on $\bbZ^d$. We will omit the proof.
\begin{cor}\label{cor:boundWasCTRW}
Let $d\ge 2$ and let $P^{0,\,d}$ and $Q^{0,\,d}$ be the laws of two bridges of random walks on $\Z^d$ with jump rates $j^{(i)}_+,\,j^{(i)}_-$ resp. $h^{(i)}_+,\,h^{(i)}_-$ in the $i$-th coordinate. Then,
\[
d_{W,\,1}(P^{0,\,d},\,Q^{0,\,d})\le 9\sum_{i=1}^d\left|j^{(i)}_+ j^{(i)}_- - h^{(i)}_+ h^{(i)}_-\right|.
\]
\end{cor}
\begin{remark}
Once again, we wish to stress that the bound in Proposition~\ref{prop:boundWasCTRW} (resp.\ Corollary~\ref{cor:boundWasCTRW}) is compatible with what is known about conditional equivalence for bridges of random walks on $\bbZ$ (resp.\ $\bbZ^d$) with spatially homogeneous jump rates. Indeed, two random walks share their bridges if and only if $j_+j_-=h_+h_-$.
\end{remark}
\paragraph{Coupling construction}
The construction of a Markov process with generator $\mathscr{L}$ can be performed similarly to the previous section defining on a common probability space $(\Omega,\cF,\bbP)$ a family of independent identically distributed Poisson processes $\Xi\ldef{\{ \xi^A\}}_{A\in\mathscr{A}}$
with rate one and a Poisson random measure $\beta$ on $\bbR_+\times \mathscr{A}$ with intensity $j_+ j_-\lambda\otimes\lambda_\mathscr{A}$, where $\lambda$ is the Lebesgue measure on $\bbR_+$ and $\lambda_\mathscr{A}$ is the Lebesgue measure on $\mathscr{A}$.
Using the noises $(\Xi,\beta)$, it is now straightforward to construct inductively a continuous-time Markov chain ${(U_t)}_{t\geq 0}$ started in $U\in\mathscr{U}$ by sampling the interarrival times as in~\eqref{def:Uchain} and~\eqref{def2:Uchain}.
In words, the dynamics follows a birth-and-death mechanism. A birth occurs after an exponentially distributed time of rate $j_+j_-$: a positive-negative pair of jump times $(r,s)$ is sampled uniformly from $\mathscr{A}$, $r$ is added to $U^+_t$, and $s$ is added to $U^-_t$.
A death occurs at rate $|U^+_t|\,|U^-_t|$: a pair $(r,s)$ is sampled uniformly from $U^+_t \times U^-_t$, $r$ is removed from $U^+_t$ and $s$ from $U^-_t$.
It follows from the same argument of Lemma~\ref{lem:genident} that ${(U_t)}_{t\geq 0}$ has generator $\mathscr{L}$. We denote by $\bbP^U$ its law on $\bbD(\bbR_+;\mathscr{U})$, by $\bbE^U$ the corresponding expectation and by ${(S_t)}_{t\geq 0}$ its semigroup.
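The birth-and-death mechanism just described can be sketched as follows (purely illustrative; function name ours, $U$ modeled as a pair of frozensets of jump times):

```python
import random

def bridge_chain_step(U, j_plus, j_minus, rng):
    """One jump of the chain with generator L: a fresh positive-negative
    pair of jump times is born at rate j_plus * j_minus, and each pair in
    U_plus x U_minus dies at rate 1."""
    U_plus, U_minus = U
    birth_rate = j_plus * j_minus
    death_rate = len(U_plus) * len(U_minus)
    total = birth_rate + death_rate        # total jump rate
    hold = rng.expovariate(total)          # exponential holding time
    if rng.random() < birth_rate / total:  # birth: add a uniform pair
        return hold, (U_plus | {rng.random()}, U_minus | {rng.random()})
    r = rng.choice(sorted(U_plus))         # death: remove a uniform pair
    s = rng.choice(sorted(U_minus))
    return hold, (U_plus - {r}, U_minus - {s})
```

Note that $|U^+_t| = |U^-_t|$ is preserved at every jump, in accordance with the definition of $\mathscr{U}$.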
We will describe in words the coupling construction which is based on the one for the hypercube of Subsubsection~\ref{subsec:coupling_hyper}. To simplify the exposition, by ``adding (removing) $(u,v)$ to (from) $U_{T^U_k}$'' we mean that $u$ is added to (removed from) $U^+_{T^U_k}$ and $v$ is added to (removed from) $U^-_{T^U_k}$.
Following closely the notation used for the hypercube, we consider the times $\{T^U_k:\,k\in \bbN \}$, representing the jump times of the chain ${(U_t)}_{t\geq 0}$, and the sequence $\{ \mathbf{A}_k^U:\,k\in \bbN \}$, representing the pair in $\mathscr{A}$ which is either added to or removed from the chain at time $T^U_k$.
Let us begin by fixing $U,\,V\in \mathscr{U}$ such that $U=\Psi_{r,s}V$ with $r\notin V^+$ and $s\notin V^-$ as in Figure~\ref{fig:coup1RWZ}. We want to construct a coupling ${(U_t,V_t)}_{t\geq 0}$ of $\bbP^U$ and $\bbP^V$. Our coupling works algorithmically as follows: we start at time $k=0$.
For all $k$ such that both $r\in U^+_{T^U_k}$ and $s\in U^-_{T^U_k}$
\begin{enumerate}[label=\arabic*)]
\item add (remove) simultaneously the same points to (from) $V^+_{T^U_k},V^-_{T^U_k}$ that are added to (removed from) $U^+_{T^U_k},U^-_{T^U_k}$. In other words, we use the clocks $\{\xi^{(u,v)},\, (u,v)\in U^+_{T^U_k} \times U^-_{T^U_k}\}$ to remove pairs from $V^+_{T^U_k}$ and $V^{-}_{T^U_k}$, and the process $\beta$ in order to add new pairs.
\end{enumerate}
Let $m\ldef \inf \{k:\,\text{either }r\notin U^+_{T^U_k}\text{ or }s\notin U^-_{T^U_k}\}$ so that, as before,
\[
T_m^U \ldef \inf \{t>0:\; \mbox{either $r\notin U_t^+$ or $s\notin U_t^-$}\},
\]
and the pair $\mathbf{A}_m^U$ removed at time $T_m^U$ is of the form $(r,s)$, $(r,w)$ or $(w,s)$ for some $w \neq r,\,s$.
For the sake of example, say that $\mathbf{A}_m^U=(r,\,w),\,w\neq s$, is the pair that is removed at time $T^U_m$. Then
\begin{enumerate}
\item[2)] $\mathbf{A}_m^U$ is removed from $U_{T_m^U}$ and nothing happens in $V_{T_m^U}$.
Set $\zeta$ to be the point among $r$ and $s$ which is not removed (in our example $\zeta:=s$) and $\eta$ the point that is neither $r$ nor $s$ and that is removed (in our example $\eta:=w$); see Figure~\ref{fig:syncrocouplRWZ}.
\item[3)] For $t>T_m^U$ repeat 1) with the difference that for the dynamics ${(V_t)}_{t\geq 0}$ we replace each clock $\xi^{(u,\,\eta)}$ with the clock $\xi^{(u,\,\zeta)}$ for any $u\in (0,\,1)$.
\end{enumerate}
The algorithm is built in such a way that at the first instant $T_M^U>T_m^U$ when a Poisson clock involving $\zeta$ rings the two dynamics will coincide and continue together almost surely.
By construction, we have that almost surely for all $t\geq 0$
\begin{equation}\label{eq:distance_coupling}
d(U_t,V_t) = \mathbbm{1}_{\{t<T_m^U\}}+2 \mathbbm{1}_{\{T^U_m\leq t<T_M^U\}},
\end{equation}
which leads immediately to the following.
\begin{proof}[Proof of Proposition~\ref{prop:lipCRWZ}] As a first step one uses~\eqref{eq:distance_coupling} to show that for all $U,\,V\in \mathscr{U}$ such that $d(U,V)=1$ and all $t\geq 0$
\[
\bbE[d(U_t,V_t)] \leq 4 \exp(-t/2)+\exp(-t).
\]
From here,~\eqref{eq:lipCTRWZ} is derived in the same way as for the hypercube case.
\end{proof}
\begin{figure}[!ht]
\centering\subfloat[Clock of $(r,\,w)$ rings. Here $\zeta=s$, $\eta=w$.]{
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-0.58) rectangle (6.64,5.92);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U_{T^U_m}$};
\draw (5.54,0.62) node[anchor=north west] {$V_{T^U_m}$};
\draw [line width=1.2pt,color=ffqqtt] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt,color=ffqqtt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt,color=ffqqtt] (4.,2.)-- (4.,0.);
\draw (-2.48,4.62) node[anchor=north west] {$r$};
\draw (-0.48,4.62) node[anchor=north west] {$s=\zeta$};
\draw (1.26,4.62) node[anchor=north west] {$w=\eta$};
\draw (4.26,4.62) node[anchor=north west] {$q$};
\draw (-4.2,5.5) node[anchor=north west] {\showclock{3}{00}};
\draw [shift={(-2.26,4.98)}] plot[domain=2.9812172096138427:4.769469762790954,variable=\t]({1.*1.377679207943562*cos(\t r)+0.*1.377679207943562*sin(\t r)},{0.*1.377679207943562*cos(\t r)+1.*1.377679207943562*sin(\t r)});
\draw [->] (-2.2796791235706597,3.6024613500538396) -- (-2.1,3.64);
\draw [->] (-3.62,5.2) to [out=70] (0.8,4);
\end{tikzpicture}
}
\quad
\subfloat[The clocks of $(q,\,s)$ in $U$ and $(q,\,w)$ in $V$ are synchronised after $(r,\,w)$ dies in $U$.]{
\definecolor{qqqqff}{rgb}{0.,0.,1.}
\definecolor{ffqqtt}{rgb}{1.,0.,0.2}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-0.58) rectangle (6.64,5.92);
\draw [line width=1.2pt,color=ffqqtt,dashed] (-2.,5.)-- (-2.,3.);
\draw [line width=1.2pt,dashed] (1.,5.)-- (1.,3.);
\draw [line width=1.2pt] (-4.,3.)-- (5.,3.);
\draw [line width=1.2pt] (-4.,0.)-- (5.,0.);
\draw (5.52,3.5) node[anchor=north west] {$U_{T^U_M}$};
\draw (5.54,0.62) node[anchor=north west] {$V_{T^U_M}$};
\draw [line width=1.2pt] (-0.62,5.02)-- (-0.62,3.02);
\draw [line width=1.2pt] (1.,2.)-- (1.,0.);
\draw [line width=1.2pt,color=ffqqtt] (4.,5.)-- (4.,3.);
\draw [line width=1.2pt,color=ffqqtt] (4.,2.)-- (4.,0.);
\draw (-0.45,4.36) node[anchor=north west] {$s$};
\draw (4.26,4.36) node[anchor=north west] {$q$};
\draw (1.26,1.38) node[anchor=north west] {$w$};
\draw [->] (-0.58,2.28) -- (-0.62,2.92);
\draw [->] (-0.58,2.28) to [out=70] (4.,3.1);
\draw [->] (0.74,2.03) to [out=90] (1.0,0.66);
\draw [->] (0.73,2.12) to [out=70] (3.92,1.26);
\begin{scriptsize}
\draw (-0.35,2.10) node {\showclock{4}{50} \quad=};
\draw (0.50,2.05) node {\showclock{4}{50}};
\end{scriptsize}
\end{tikzpicture}
}
\caption{An illustration of the coupling dynamics.}\label{fig:syncrocouplRWZ}
\end{figure}
Mimicking the proof of Proposition~\ref{prop:SteinSol} we can now state the following consequence of Proposition~\ref{prop:lipCRWZ}:
\begin{prop} Let $\mu,\nu\in \mathscr{P}(\mathscr{U})$ be probability measures. Then,
\begin{equation}\label{eq:conveq}
d_{W,1}(\nu_\# S_t,\mu_\# S_t) \leq (4 \exp(-t/2)+\exp(-t)) d_{W,1}(\mu,\nu).
\end{equation}
In particular, $P^{00}$ is the only invariant distribution of $S_t$. Furthermore, for any $f\in \mathrm{Lip}_1(\mathscr{U})$ such that $E_{P^{00}}[f]=0$
\[
g(U)\ldef -\int_0^\infty S_t f(U)\De t,\quad U\in\mathscr{U}
\]
solves $\mathscr{L}g(U)=f(U)$ for all $U\in\mathscr{U}$ and
\begin{equation}\label{eq:lipsol}
|g(U)-g(V)|\leq 9 d(U,V),\quad \forall U,\,V\in\mathscr{U}.
\end{equation}
\end{prop}
\subsection{Non-homogeneous continuous-time random walks}\label{subsec:revCTRW}
In this Subsection we consider continuous-time random walk bridges on the integers with possibly non-homogeneous jump rates. Generalizations to higher dimensions can be obtained in the same fashion as Corollary~\ref{cor:higher_dim}.
Recall that $P^{00}$ of Subsection~\ref{subsec:CTRW} is the law of a random walk bridge with homogeneous jump rates $j_+$, $j_-$. To simplify matters, we fix the rates as $j_-=j_+:=1$.
We will use the same setting as in subsection~\ref{subsubsec:notCTRW}.
In particular we will consider the set $\mathscr U$ of jump times that uniquely identifies a bridge, the map $\bbU$ that associates to a bridge $X$ its jump times $\bbU(X)=(\bbU{(X)}^+, \bbU{(X)}^-)$, and its inverse $\bbX$, which allows one to reconstruct the path from the jump times.
We will often regard measures on the path space as measures on the set $\mathscr U$ via the pushforward $\bbU$.
Define on $\mathbb D([0,\,1];\,\Z)$ the law $\mathbf P$ of a random walk $X$ on $\Z$ with infinitesimal generator
\begin{equation}\label{eq:genRWnonhom}
\mathcal G f(j):=a(j)(f(j+1)-f(j))+b(j)(f(j-1)-f(j)),\;j\in \Z
\end{equation}
where $f:\Z\to\R$ has bounded support and $a,\,b:\,\Z\to(0,\,+\infty)$ are the jump rates.
Let us write $\bfP^{00}$ for the law of the bridge of $X$ from $0$ to $0$:
\[
\bfP^{00}\left(X\in \cdot\right):=\bfP\left(X\in \cdot|\,X_0=0,\,X_1=0\right).
\]
We want to obtain a bound on the Wasserstein distance on $(\mathscr{U},d)$ between $P^{00}$ and $\bfP^{00}$.
For that we shall implement the strategy presented in Remark~\ref{rem:perturbation_operator}. The first step is to identify an operator which admits $\bfP^{00}$ as invariant measure. This is achieved using the observation~\ref{A)}.
Define for $X\in \bbD([0,1];\bbZ)$
\begin{equation}\label{eq:RNrandomwalks}
M(X) \ldef \exp\left(-\int_0^1 \Xi(X_{t-}) \De t\right)\prod_{t\in \bbU{(X)}^+}a(X_{t-})\prod_{s\in \bbU{(X)}^-}b(X_{s-}),
\end{equation}
where $\Xi(j) := a(j)+b(j)$ is the total jump rate at $j\in \bbZ$.
Then, we have the following Proposition.
\begin{prop}\label{prop:SteinRevCTRW}
$\bfP^{00}$ is an invariant law for the generator $\mathscr G$ on $\mathscr U$ defined by
\begin{align}\label{eq:genRN}
\mathscr G f(U) :=\int_{{[0,\,1]}^2}\left(f(\Psi_{u,\,v}U)-f(U)\right) &\frac{M(\bbX(\Psi_{u,v}U))}{M(\bbX(U))} \De u \De v \nonumber \\ &+\sum_{u\in U^+,v\in U^-}\left(f(\Psi_{u,\,v}U)-f(U)\right)
\end{align}
for any $f:\,\mathscr U\to\R$ bounded and measurable.
\end{prop}
\begin{remark}
Note that we do not need to know that $\bfP^{00}$ is the unique law satisfying the above Proposition. We will only use for our purposes that $\bfP^{00}$ is one such law.
\end{remark}
\begin{remark}
It is possible to extend the proposition above and the considerations that follow to jump rates $a(t,j),\,b(t,j)$, $j\in \bbZ$, that also depend on time. For that it suffices to identify the suitable change of measure $\De \bfP / \De P$, which in fact is available in~\cite{CL15}.
\end{remark}
\begin{proof}
The main idea of this proof is the following: we begin by working on the path space, where IBP formulas are available, and in the end we transfer the results to the set $\mathscr U$,
finally proving that $E_{\bfP^{00}}[\mathscr Gf]=0$ for all bounded and measurable $f:\mathscr U\to\R$. We begin by noticing that Girsanov's formula (cf.~\citet[Section 3, Eq. (13)]{CL15}) yields
\begin{align*}
\frac{\De \bP^{00}}{\De P^{00}}(X)&\propto
\exp\left(
\sum_{t:\,X_{t_-}\neq X_t} \log\left(a(X_{t_-})\mathbbm{1}_{\{t\in \bbU{(X)}^+\}}+b(X_{t_-})\mathbbm{1}_{\{t\in \bbU{(X)}^-\}}\right)\right.\\
&
\left.-\int_0^1 (a(X_{t-})+b(X_{t-})) \,\De t\right)\\
&=\exp\left(-\int_0^1 (a(X_{t-})+b(X_{t-})) \De t\right)\prod_{t\in \bbU{(X)}^+}a(X_{t-})\prod_{s\in \bbU{(X)}^-}b(X_{s-}) = M(X).
\end{align*}
Consider now~\eqref{eq:RWIBP} for a random walk bridge with unit jump rates. Take as test function
\[
F(X,\,u,\,v)M(X)
\]
where $F$ is any bounded and measurable function.
By multiplying and dividing the left-hand side by the Radon--Nikodym derivative~\eqref{eq:RNrandomwalks} we obtain
\begin{align*}
E_{\bfP^{00}} &\left[ \int_{{[0,\,1]}^2}F(X+\mathbbm{1}_{[u,\,1]}-\mathbbm{1}_{[v,\,1]},\,u,\,v)\frac{M(X+\mathbbm{1}_{[u,\,1]}-\mathbbm{1}_{[v,\,1]})}{M(X)}\De u \De v\right] \\
& =E_{\bfP^{00}}\left[\sum_{(u,\,v)\in \bbU{(X)}^+\times \bbU{(X)}^-}F(X,\,u,\,v)\right].
\end{align*}
As before, we now pass to the image measure, i.e.\ we choose $F(X,\,u,\,v):=G(\bbU(X),\,u,\,v)$ with $G:\mathscr{U}\times(0,1)\times(0,1)\to \bbR$. We thus obtain
\begin{align*}
E_{\bfP^{00}} \left[ \int_{{[0,\,1]}^2}G(\Psi_{u,\,v}U,\,u,\,v)\frac{M(\bbX(\Psi_{u,v}U))}{M(\bbX(U))}\De u \De v\right]
=E_{\bfP^{00}}\left[\sum_{(u,\,v)\in U^+\times U^-}G(U,\,u,\,v)\right].
\end{align*}
The conclusion follows by choosing
\[
G(U,\,u,\,v):=f(U)-f(\Psi_{u,\,v}U).\qedhere
\]
\end{proof}
Having the generator, we can now employ the Stein--Chen method to obtain a bound in the Wasserstein distance as desired. Recall that $\Delta$ is the diagonal of ${[0,\,1]}^2$.
\begin{cor}\label{prop:finalBoundCTRWrev}
Let $P^{00}$ be as in Subsection~\ref{subsec:CTRW} with unit jump rates.
Let $\bfP^{00}$ be the law of a continuous-time random walk bridge with rates $a(\cdot),\,b(\cdot)$ as above.
If $M$ is as in~\eqref{eq:RNrandomwalks} and $\nabla_{u,\,v}\log M(U):={(M(\bbX(U)))}^{-1}(M(\bbX(\Psi_{u,\,v}U))-M(\bbX(U)))$ then
\[
d_{W,\,1}\left(\bfP^{00},\,P^{00}\right)\le 9 E_{\bfP^{00}}\left[\sup_{(u,\,v)\in {(0,\,1)}^2\setminus \Delta}\left|\nabla_{u,\,v}\log M(U)\right|\right].
\]
\end{cor}
\begin{proof} Let
\begin{equation}\label{eq:ratio}
H(U,\,u,\,v):= \frac{M(\bbX(\Psi_{u,v}U))}{M(\bbX(U))}
\end{equation}
for any $U\in \mathscr U$, $(u,\,v)\in {(0,\,1)}^2\setminus \Delta$.
We begin by observing that if $g$ solves
\[
\mathscr L g=f,\quad f\in \mathrm{Lip}_1(\mathscr U),\, \,E_{P^{00}}[f]=0,
\]
(cf.~\eqref{eq:sol_CTRWZ}) we can bound the $1$-Wasserstein distance between $\bfP^{00}$ and $P^{00}$ by computing
\begin{align}
\left|E_{\bfP^{00}}\left[\mathscr L g-\mathscr Gg\right]\right| & \leq E_{\bfP^{00}}\left[\int_{{[0,\,1]}^2}\left|g(\Psi_{u,v}U)-g(U)\right|\left|H(U,\,u,\,v)-1\right| \De u \De v\right]\nonumber \\
& \stackrel{\text{Eq.~\eqref{eq:lipsol} }}{\le}9
E_{\bfP^{00}}\left[\int_{{[0,\,1]}^2}\left|H(U,\,u,\,v)-1\right|\De u \De v\right].\label{eq:yetanotherbound}
\end{align}
To conclude, observe that
\[
H(U,\,u,\,v)-1=\nabla_{u,\,v}\log M(U).\qedhere
\]
\end{proof}
\begin{remark}[Bridges and reciprocal characteristics]
The bound in Corollary~\ref{prop:finalBoundCTRWrev} is compatible with the results of~\cite{CL15} on conditional equivalence for bridges, in the sense that two random walks with the same bridges on $\bbZ$ satisfy the same estimate.
In fact,~\citet[Theorem 2.4]{CL15} show that two random walks have the same bridges if and only if the quantities $a(i)b(i+1)$ and $\Xi(i+1)-\Xi(i)$ coincide for all $i\in \bbZ$.
The functions $i\mapsto a(i)b(i+1)$ and $i\mapsto \Xi(i+1)-\Xi(i)$ are known in the literature under the name of \emph{reciprocal characteristics}. They naturally identify two important families of random walk bridges:
\begin{itemize}
\item the bridges of reversible random walks, when $a(i)b(i+1)$ is constant,
\item the bridges of constant speed random walks, when $\Xi(i+1)-\Xi(i)$ is zero.
\end{itemize}
\end{remark}
We will now consider these two types of bridges and will give quantitative bounds on the approximation by the bridge of the simple random walk in the $1$-Wasserstein distance.
\subsubsection{The case of continuous-time reversible random walks}
Assume that
\begin{equation}\label{eq:ass_revRW}
a(j) b(j+1) = 1
\end{equation}
for all $j\in \bbZ$. In this case, $\bfP$ is the law of a reversible random walk on $\bbZ$. A reversible measure $\pi$ can be found, up to a multiplicative constant, by imposing
\[
\pi(j+1) = {\pi(j)} a{(j)}^2,\quad \forall \, j\in \bbZ.
\]
Moreover, $M$ as defined in~\eqref{eq:RNrandomwalks} takes the form
\[
M(X)\ldef \exp\left(-\int_0^1 \big(a(X_{t-})+b(X_{t-})\big) \De t \right).
\]
This is due to the fact that, since $|\bbU{(X)}^+|=|\bbU{(X)}^-|$ and $X$ is a bridge, we can define a bijection $m:\bbU{(X)}^+\to\bbU{(X)}^-$ such that $X_{m(t)-}=X_{t-}+1$ for all $t\in \bbU{(X)}^+$, and use $a(j) b(j+1) = 1$ to simplify. In particular,
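Spelled out, writing $j_t := X_{t-}$ for $t\in\bbU{(X)}^+$, the pairing gives $X_{m(t)-}=j_t+1$, so that
\[
\prod_{t\in \bbU{(X)}^+}a(X_{t-})\prod_{s\in \bbU{(X)}^-}b(X_{s-})
=\prod_{t\in \bbU{(X)}^+}a(j_t)\,b(j_t+1)=1.
\]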
\[
\nabla_{u,\,v}\log M(U) = \exp\left(-\int_{\min \{u,\,v\}}^{\max \{u,\,v\}}\nabla^{\mathrm{sgn}(v-u)}\Xi(\bbX{(U)}_{t-})\De t\right) -1
\]
where $\Xi(j) := a(j)+b(j)$ is the total jump rate at $j\in\bbZ$ and we have set $\nabla^\pm h(i):=h(i\pm 1)-h(i)$ for $h:\Z\to\R$.
This can be used as a starting point to obtain distance bounds. For example, we can immediately prove the following universal bound, which depends only on the speed of the random walk.
\begin{prop}
Let $\mathbf P^{00}$ be the law of the bridge of a continuous-time random walk satisfying~\eqref{eq:ass_revRW} and for which there exists $\kappa>0$ such that for all $j\in \bbZ$
\[
|\Xi(j+1)-\Xi(j)|\leq \kappa.
\]
Then
\[
d_{W,\,1}\left(\bfP^{00},\,P^{00}\right)\le 9 \left(2\cdot\frac{ e^\kappa-1-\kappa}{\kappa^2}-1\right).
\]
\end{prop}
\begin{proof}
This is a direct consequence of~\eqref{eq:yetanotherbound} and the bound
\[
\exp(-\kappa|u-v|)-1\leq \nabla_{u,\,v}\log M(U) \leq \exp(\kappa|u-v|)-1,\quad \forall\, u,\, v\in(0,1). \qedhere
\]
\end{proof}
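\begin{remark}
For the reader's convenience, we record the elementary integral behind the constant: since $|\Xi(j\pm 1)-\Xi(j)|\leq \kappa$ implies $|\nabla_{u,\,v}\log M(U)|\leq \e^{\kappa|u-v|}-1$, it suffices to note that
\[
\int_{{[0,\,1]}^2}\big(\e^{\kappa|u-v|}-1\big)\De u \De v = 2\int_0^1\!\!\int_0^v\big(\e^{\kappa(v-u)}-1\big)\De u\De v = 2\,\frac{\e^\kappa-1-\kappa}{\kappa^2}-1.
\]
\end{remark}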
\subsubsection{The case of continuous-time constant-speed random walk}
We now provide some explicit bounds for a class of random walk bridges on $\Z$ whose underlying random walk has constant speed. Namely, still in the measure-theoretic setting of subsection~\ref{subsubsec:notCTRW}, we consider the random walk $\bfP$ whose generator $\mathcal{G}$ is given in equation~\eqref{eq:genRWnonhom} and whose jump rates $a,\,b:\,\Z\to(0,\,+\infty)$ satisfy
\begin{equation}\label{eq:condRWvarying}
\Xi(j) = \kappa,\quad
\nu \le a(j)b(j+1)\le \mu,\quad\forall \,j\in \Z,
\end{equation}
where $\mu\geq \nu>0$. Notice that the walk need not be reversible. Its bridge will, as before, have law
\[
\bfP^{00}(X\in\cdot):=\bfP(X\in\cdot|X_0=0,\,X_1=0).
\]
Our target process remains the same as in the previous pages, that is, the random walk bridge with unit jump rates whose law is $P^{00}$; any choice of homogeneous jump rates is also possible.
In fact, the only thing that we need is to be able to solve Stein's equation for the operator $\mathscr{L}$ associated to $P^{00}$ and to provide estimates for the solution.
Generalizations to higher dimensions, e.g.\ random walks on $\bbZ^d$, are also possible.
\begin{theorem} In the above setting
\begin{align*}
d_{W,\,1}(\bfP^{00},\,P^{00}) & \leq 9 \left(\mu\cdot \frac{I_0(2\mu/\sqrt{\nu})}{I_0(2\sqrt{\nu})} - \sqrt{\mu \nu} + |1-\sqrt{\mu \nu}| \right)
\end{align*}
where $I_0$ is the modified Bessel function of the first kind.
\end{theorem}
\begin{remark} Observe that with the choice $\mu = \nu$ we recover the bound of Proposition~\ref{prop:boundWasCTRW}.
\end{remark}
\begin{proof}
Let $P^{00}_{\lambda}$ be the bridge of a random walk on $\bbZ$ with jump rates $j_+$, $j_-$ such that $j_+ j_- = \lambda$, $\lambda>0$. Then
\begin{equation}\label{eq:triangular}
d_{W,1}(\bfP^{00},P^{00}) \leq d_{W,1}(P^{00}_\lambda,P^{00}) + d_{W,1}(\bfP^{00},P_\lambda^{00}).
\end{equation}
Clearly, by Proposition~\ref{prop:boundWasCTRW} we have $d_{W,1}(P^{00}_\lambda,P^{00}) \leq 9|1-\lambda|$. We now proceed to estimate the second contribution, which boils down to getting estimates for the ratio~\eqref{eq:ratio}.
Since~\eqref{eq:condRWvarying} ensures that $a(X_{t-})+b(X_{t-}) = \kappa$ for all $t$, we can replace this in~\eqref{eq:RNrandomwalks} and get
\[
M(X) = \exp(-\kappa) \prod_{t\in \mathbb{U}{(X)}^+}a(X_{t-})\prod_{s\in \mathbb{U}{(X)}^-}b(X_{s-}).
\]
We claim the following.
\begin{claim}\label{claim:CTRW}For every $X\in \Pi([0,\,1];\Z)$ and uniformly over $u,\,v\in (0,\,1)$, $u\neq v$
\[
\nu\cdot{(\nu/\mu)}^{|\mathbb{U}{(X)}^+|}\le \frac{M(X+\mathbbm{1}_{[u,\,1]}-\mathbbm{1}_{[v,\,1]})}{M(X)}
\le \mu\cdot{(\mu/\nu)}^{|\mathbb{U}{(X)}^+|}.
\]
\end{claim}
It follows by the same technique as in Corollary~\ref{prop:finalBoundCTRWrev} and Claim~\ref{claim:CTRW}
that
\[
d_{W,1}(P^{00}_\lambda,\bfP^{00}) \leq 9 E_{\bfP^{00}}\left[\left|\mu{(\mu/\nu)}^{|\mathbb{U}{(X)}^+|}-\lambda\right|\vee
\left|\nu{(\nu/\mu)}^{|\mathbb{U}{(X)}^+|}-\lambda\right|\right].
\]
We see that choosing $\lambda: = \sqrt{\mu \nu}$ entails
\[
\left|\mu{(\mu/\nu)}^{|\mathbb{U}{(X)}^+|}-\sqrt{\mu\nu}\right|\vee \left|\nu{(\nu/\mu)}^{|\mathbb{U}{(X)}^+|}-\sqrt{\mu\nu}\right|
\leq \mu{(\mu/\nu)}^{|\mathbb{U}{(X)}^+|} - \sqrt{\mu \nu},
\]
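To verify the last inequality, set $\rho := \sqrt{\mu/\nu}\geq 1$ and $k:=|\mathbb{U}{(X)}^+|$, so that $\mu{(\mu/\nu)}^{k}=\sqrt{\mu\nu}\,\rho^{2k+1}$ and $\nu{(\nu/\mu)}^{k}=\sqrt{\mu\nu}\,\rho^{-(2k+1)}$; then
\[
\sqrt{\mu\nu}-\nu{(\nu/\mu)}^{k}=\sqrt{\mu\nu}\big(1-\rho^{-(2k+1)}\big)\leq \sqrt{\mu\nu}\big(\rho^{2k+1}-1\big)=\mu{(\mu/\nu)}^{k}-\sqrt{\mu\nu},
\]
since $\rho^{x}-1\geq 1-\rho^{-x}$ for all $\rho\geq 1$ and $x\geq 0$.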
thus all that is left to do is to find a bound for
\[
\mu E_{\bfP^{00}}\left[{(\mu/\nu)}^{|U^+|}\right],
\]
where, with a slight abuse of notation, we have pushed the measure $\bfP^{00}$ forward via $\mathbb{U}$, writing $U:=\mathbb{U}(X)$.
Hence it will suffice to bound the exponential moments of $|U^+|$. Introduce, for $t\in \R$, the Laplace transforms $\phi(t):=E_{\bfP^{00}}[\exp(t|U^+|)]$ and $\xi(t):=E_{P^{00}}\left[\exp(t|U^+|)\right]$. By change of measure
\begin{align*}
\phi(t)=\frac{E_{P^{00}}[\e^{t|U^+|+\log M(\mathbb{X}(U))}]}{E_{P^{00}}[\e^{\log M(\mathbb{X}(U))}]}.
\end{align*}
Since $|U^+|=|U^-|$ and~\eqref{eq:condRWvarying} holds, one can derive the bound
\[
{|U^+|}\log \nu -\kappa\le\log M(\mathbb{X}(U))\le {|U^+|}\log \mu-\kappa
\]
for every $U$, so that
we get the following two-sided estimate:
\[
\frac{\xi(\log \nu+t)}{\xi(\log\mu)}\leq\phi(t)\le\frac{\xi(\log \mu+t)}{\xi(\log\nu)}.
\]
Under the law $P^{00}$, $|U^+|$ has the law described in Subsection~\ref{subsec:PoiDiag}, that is, $\mathrm{Poi}(1)\otimes\mathrm{Poi}(1)$ conditioned on the diagonal.
A direct computation on the Laplace transform following from~\eqref{eq:rep_Q_Poisson} yields that
\[
\xi(t)=\frac{I_0(2 \e^{t/2})}{I_0(2)}.
\]
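This identity can be checked directly from the series representation $I_0(x)=\sum_{n\geq 0}{(x/2)}^{2n}/{(n!)}^2$: since $P^{00}(|U^+|=n)$ is proportional to $1/{(n!)}^2$,
\[
\xi(t)=\frac{\sum_{n\geq 0} \e^{tn}/{(n!)}^2}{\sum_{n\geq 0} 1/{(n!)}^2}=\frac{I_0(2 \e^{t/2})}{I_0(2)}.
\]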
Therefore we can set $t:=\log (\mu/\nu)$ and obtain
\begin{align}\label{eq:final}
\mu E_{\bfP^{00}}\left[{(\mu/\nu)}^{|U^+|}\right] = \mu \phi(\log (\mu/\nu))
\leq \mu\cdot \frac{I_0(2\mu/\sqrt{\nu})}{I_0(2\sqrt{\nu})}.
\end{align}
Finally~\eqref{eq:final}, together with~\eqref{eq:triangular} and the fact that we chose $\lambda = \sqrt{\mu \nu}$, gives the bound.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim:CTRW}]
We prove that $\bfP^{00}$-almost surely
\begin{equation}\label{eq:loopcount} M(X) \leq \exp(- \kappa)\mu^{|\bbU^+(X)|}.
\end{equation}
An identical argument can then be used to show that
$M(X) \geq \exp(-\kappa)\nu^{|\bbU^+(X)|}$; the claim then easily follows observing that $|\bbU^+(X+\mathbbm{1}_{[u,\,1]}-\mathbbm{1}_{[v,\,1]})| = |\bbU^+{(X)}|+1$. Let us prove~\eqref{eq:loopcount} by induction on $|\bbU^+{(X)}|$.
The case $|\bbU^+ (X)|=0$ is obvious, since the only path verifying this condition which is also in the support of $\bfP^{00}$ is the zero path.
Let $|\bbU{(X)}^+|=n+1$. Then either $X_t>0$ for some $t \in (0,1)$ or $X_t<0$ for some $t \in (0,1)$; we assume w.l.o.g.\ that the first condition is met. Define $h:= \max_{t \in [0,1] } X_t$ (we avoid the letter $M$, which already denotes the density), $\tau := \inf \{ t: X_t=h \}$, and $\theta$ as the first jump time of $X$ after $\tau$. Observe that, by construction,
\begin{equation}\label{eq:loopcount2}
\tau \in \bbU{(X)}^+,\,\theta \in \bbU{(X)}^-, \quad \text{and }\, (X_{\tau-},X_{\tau},X_{\theta})=(h-1,h,h-1).
\end{equation}
Consider now the path
$Z$ obtained by removing the jumps at $\tau$, $\theta$, i.e.
\[ Z = X-\mathbbm{1}_{[\tau,\,1]}+\mathbbm{1}_{[\theta,\,1]}.\]
By construction $X_t$ and $Z_t$ coincide outside $[\tau,\theta)$ and $Z$ makes no jumps in $[\tau,\theta]$, whereas in the same interval $X$ goes first from $h-1$ to $h$ (at $\tau$) and then from $h$ to $h-1$ (at $\theta$), see~\eqref{eq:loopcount2}. Thus we have
\[ M(X) = M(Z)\, a(h-1)\, b(h).\]
Since $|\bbU{(Z)}^+| = n$, the conclusion follows using the inductive hypothesis and~\eqref{eq:condRWvarying}.
\end{proof}
\subsection{An approximation scheme for the simple random walk bridge}\label{subsec:schemesRW}
In this Subsection we are interested in schemes for approximating the continuous-time random walk bridge with rates $j_+=j_-:=1$. Its law $P^{00}$ has been defined in Subsection~\ref{subsec:CTRW}.
Let $N\in \bbN$ be fixed. Consider a sufficiently large probability space $(\Omega,\mathcal{F},Q)$ on which we can define independent random variables $\xi_1,\,\ldots,\,\xi_N$, $\tau_1,\,\ldots,\,\tau_N$ such that for all $j=1,\,\ldots,\,N$
\[
Q(\xi_j =1)=Q(\xi_j =-1)=1/N,\quad Q(\xi_j=0)=1-2/N
\]
and $\tau_{j}$ is uniformly distributed on $I_j:= ((j-1)/N,\,j/N]$. We define the process $Y$ with values in $\bbD([0,1];\bbZ)$ via
\[ Y_t:=\sum_{j=1}^N \xi_j \IND_{[\tau_j,\,1]}(t),\quad t\in[0,\,1], \]
and call $P^0_N$ its law. Let $P^{00}_N$ be the distribution of its bridge:
\[ P^{00}_N(\cdot)=P^{0}_N( \cdot |Y_1=0). \]
The bridge measure $P^{00}_N$ is clearly supported in $\Pi([0,1];\bbZ)$ which is in bijection with $\mathscr{U}$. As in the previous sections, we shall still use the notation $P^{00}_N$ for the pushforward of $P^{00}_N$ through $\bbU$.
In this subsection we shall prove the following.
\begin{theorem}\label{thm:scheme}
For all $N\in \bbN$ we have
\[ d_{W,1}(P^{00}_N ,P^{00}) \leq \frac{1}{N}\cdot \frac{9 \left(9 N^3-54 N^2+64 N-16\right)}{{(N-2)}^3}.
\]
\end{theorem}
The theorem will be proved at the end of the subsection.
The first step towards the proof of the result is exhibiting a dynamics for which $P^{00}_N$ is invariant. To this end, for every $U\in \mathscr{U}$ define
\[
\mathscr{B}(U):= \Big \{(r,s) \in \mathscr{A}:\,\lceil r N\rceil \neq \lceil s N\rceil\text{ and }(I_{\lceil r N\rceil } \cup I_{\lceil s N\rceil}) \cap (U^+ \cup U^-) =\emptyset\Big \}
\]
where recall that $\mathscr A:={(0,\,1)}^2\setminus\Delta$.
Consider the operator defined for any bounded measurable function $f:\mathscr{U} \rightarrow \bbR$ as
\begin{equation}
\mathscr{L}^N f(U) \ldef{ \left( 1-\frac{2}{N}\right)}^{-2} \int_{\mathscr{B}(U)} (f( \Psi_{r,s} U ) - f(U)) \De r \De s+ \sum_{(r,s) \in U^+ \times U^-} (f( \Psi_{r,s}U )-f(U)).\label{eq:formLN}
\end{equation}
We will show below that such an operator admits $P^{00}_N$ as invariant distribution. To do so, first we want to calculate $\De P^{00}_N/\De P^{00}$. In fact, the knowledge of the Radon--Nikodym derivative can be used to derive an integration by parts formula for $P^{00}_N$ by bootstrapping that of $P^{00}$ in the spirit of~\eqref{eq:genLeibniz} and subsequent discussion.
\begin{lemma}
Let
\begin{equation}\label{eq:scheme_support} \mathcal{S}= \{ U: |(U^+ \cup U^-) \cap I_{j}| \leq 1 \,\text{ for all } \, j=1,\ldots,N \}.
\end{equation}
We have
\begin{equation*} \frac{\De P^{00}_N}{\De P^{00}}(U) = \frac{1}{Z} \IND_{\{U \in \mathcal{S}\}}{\left(1-\frac{2}{N}\right)}^{N-2|U^+|}.
\end{equation*}
\end{lemma}
\begin{proof}
Recall that $P^0$ is the law of the continuous time random walk started at $0$, without conditioning at the terminal point. We prove that
\begin{equation}\label{eq:scheme_dns}
\frac{\De P^0_N}{\De P^0}(U) = \e^2{\left(1-\frac{2}{N}\right)}^{N-2|U^+|}\IND_{\{U \in \mathcal{S}\}}=:M(U).
\end{equation}
The conclusion then follows from the fact that the conditional density $\frac{\De P^{00}_N}{\De P^{00}}$ is equal to $\frac{\De P^0_N}{\De P^0}$ up to a multiplicative constant, and that $P^{00}_N(|U^+|=|U^-|)=1$. It follows from the construction of $Y$ that $P^{00}_N(\mathcal{S})=1$. Moreover, we observe that a generating family for the restriction to $\mathcal{S}$ of the canonical sigma algebra is given by events of the form
\begin{equation}\label{eq:event_form} A=\bigcap_{\substack{j=1, \ldots ,N \\ i=1,\ldots, L}} \{ |U^+ \cap (a^+_{ij}, b^+_{ij}]| = k^+_{ij} \} \cap \{ |U^- \cap (a^-_{ij},b^-_{ij}]| =k^-_{ij} \}
\end{equation}
where, for all $1 \leq j \leq N$, we have that $\sum_{i=1}^L (k^+_{ij}+k^{-}_{ij}) \leq 1$, that the intervals $\{(a^+_{ij}, b^+_{ij}]: \ i=1,\ldots,\, L\}$ form a disjoint partition of $I_{j}$ and so do the intervals $\{(a^-_{ij},\, b^-_{ij}]: \ i=1,\ldots ,\,L\}$.
Thus all we have to show is that for an event $A$ as in~\eqref{eq:event_form} we have
\[ P^{0}_N(A)=E_{P^{0}}\left[M \IND_A\right] . \]
Since
\[M\equiv\e^2 {\left(1-\frac{2}{N}\right)}^{N-\sum_{i,j} (k^+_{ij}+k^{-}_{ij})}\]
on $A$, we can equivalently show that
\[ P^0_N(A)=\e^2 {\left(1-\frac{2}{N}\right)}^{N-\sum_{i,j} (k^+_{ij}+k^{-}_{ij})}P^0(A). \]
To check this define
\[J^+ := \left \{ j: \sum_{i=1}^L k^+_{ij}=1 \right \}, \quad J^- := \left \{ j: \sum_{i=1}^L k^-_{ij}=1 \right \} \]
and for all $j \in J^+$ (resp. $J^-$) define $i_j$ as the only index such that $k^{+}_{i_j j}=1$ (resp. $k^{-}_{i_j j}=1$). Then, since $U^+$ and $U^-$ under $P^0$ are distributed as a Poisson point process with mean measure the Lebesgue measure,
\[ P^0(A) = \e^{-2}\prod_{j \in J^+} (b^+_{i_j j} - a^+_{i_j j} )\prod_{j \in J^-} (b^-_{i_j j} - a^-_{i_j j} ). \]
On the other hand, using the explicit construction of $Y$,
\begin{align*}
P^0_N(A) &= \prod_{j\in J^+} Q\left(\xi_j=1,\tau_j \in [a^+_{i_j j},b^+_{i_j j} ) \right)\, \prod_{j\in J^-} Q\left(\xi_j= -1,\tau_j \in [a^-_{i_j j},b^-_{i_j j} ) \right) \times \\
&\qquad\qquad\times \prod_{j\notin J^- \cup J^+} Q(\xi_j = 0) \\
&= {\left (1-\frac{2}{N} \right)}^{N- \sum_{i,j} (k^+_{ij} +k^-_{ij}) }\prod_{j \in J^+} (b^+_{i_j j} - a^+_{i_j j} )\prod_{j \in J^-} (b^-_{i_j j} - a^-_{i_j j})\\
& = \e^2 {\left(1-\frac{2}{N} \right)}^{N- \sum_{i,j} (k^+_{ij} +k^-_{ij}) } P^0(A),
\end{align*}
where we used the fact that $|J^+\cup J^-| =\sum_{i,j} (k^+_{ij} +k^-_{ij})$.
The Lemma is now proved.
\end{proof}
\begin{prop}
$P^{00}_N$ is invariant for $\mathscr{L}^N$, i.e. for all $f:\mathscr U\to \R$ bounded and measurable
\begin{equation}\label{eq:scheme_dynamics} E_{P^{00}_N}[\mathscr{L}^N f]=0.
\end{equation}
\end{prop}
\begin{proof}
Let $M$ be the density given in~\eqref{eq:scheme_dns} and $\mathcal{S}$ as in~\eqref{eq:scheme_support}. Using the fact that
\[
M(U)=0 \Rightarrow M(\Psi_{r,s}U) = 0 \quad \text{for almost every } (r,s),
\]
we can reason exactly as in Proposition~\ref{prop:SteinRevCTRW}, to obtain that for all $f$ bounded and measurable
\[ E_{P^{00}_N} \left[ \int_{\mathscr{A}} (f( \Psi_{r,s} U ) - f(U)) \frac{M(\Psi_{r,s} U)}{M(U)} \De r\De s+ \sum_{(r,s) \in U^+ \times U^-} (f( \Psi_{r,s}U )-f(U)) \right]=0.
\]
Assume that $U \in \mathcal{S}$ and $(r,s)\notin \mathscr{B}(U)$; then either one among $I_{\lceil N r \rceil } \cap U^+,\, I_{\lceil N r \rceil } \cap U^-,\,I_{\lceil N s \rceil } \cap U^+,\, I_{\lceil N s\rceil } \cap U^-$ is non-empty or $\lceil N r \rceil = \lceil N s \rceil$.
Assume that $I_{\lceil r N\rceil } \cap U^+ \neq \emptyset$, the other cases being completely analogous. Then ${(\Psi_{r,s}U)}^+ \cup{(\Psi_{r,s}U)}^-$ has at least two points in $I_{\lceil r N\rceil }$ and thus is not in $\mathcal{S}$. Therefore
\[
(r,s) \notin \mathscr{B}(U) \Rightarrow \frac{M(\Psi_{r,s} U)}{M(U)} =0.
\]
In the same way, one can show that $(r,s) \in \mathscr{B}(U) \Rightarrow \Psi_{r,s}U \in \mathcal{S}$. Moreover, since ${(\Psi_{r,s}U)}^+$ has exactly one element more than $U^+$,
\[ \frac{M(\Psi_{r,s} U)}{M(U)} = {\left(1-\frac{2}{N}\right)}^{-2}. \]
Summing up,
\[ \frac{M(\Psi_{r,s} U)}{M(U)} = {\left(1-\frac{2}{N}\right)}^{-2} \IND_{\{(r,s) \in \mathscr{B}(U)\}} \quad P^{00}_N \text{-a.s.} \]
from which the conclusion follows.
\end{proof}
Having identified in $\mathscr{L}^N$ an operator which has $P^{00}_N$ as invariant distribution, we are ready to prove Theorem~\ref{thm:scheme}.
\begin{proof}[Proof of Theorem~\ref{thm:scheme}]
Arguing as in Corollary~\ref{prop:finalBoundCTRWrev}, we are left to evaluate
\[ d_{W,1}( P^{00}_N ,P^{00} ) \leq 9 \sup_{g \in \text{Lip}_1(\mathscr{U})} E_{P^{00}_N}\left[| \mathscr{L}g-\mathscr{L}^N g| \right]. \]
Using the explicit form of $\mathscr{L}$ in \eqref{eq:genRWonZ}, $\mathscr{L}^N$ in \eqref{eq:formLN} and the fact that $g$ is $1$-Lipschitz, we readily obtain the bound ($\lambda$ here denotes the Lebesgue measure on $[0,\,1]^2$)
\begin{eqnarray} |\mathscr{L}g-\mathscr{L}^N g| &\leq& \left({\left(1-\frac{2}{N}\right)}^{-2}-1 \right) \lambda(\mathscr{B}(U)) + \lambda(\mathscr{A} \setminus \mathscr{B}(U))\nonumber\\
&=&\left({\left(1-\frac{2}{N}\right)}^{-2}-1 \right) + \left(2-{\left(1-\frac{2}{N}\right)}^{-2} \right) \lambda (\mathscr{A} \setminus \mathscr{B}(U)).\label{eq:Lf-LNf}
\end{eqnarray}
Define for $1\leq i,\,j \leq N$ the square $S_{ij} := I_{i}\times I_{j}$ and, for any $v \in U^+ \cup U^-$, the index $k^v$ as that of the interval $I_{k^v}$ containing $v$. Note that the map $v\mapsto k^v$ is $P^{00}_N$-almost surely injective. As a consequence the family ${\{ k^v \}}_{v \in U^+ \cup U^-}$ consists of $|U^+|+|U^-|=2|U^+|$ distinct elements.
Also observe that, by definition of $\mathscr{B}(U)$, we have that
\begin{equation}\label{eq:occ_cond} S_{ij} \not\subset \mathscr{B}(U) \Rightarrow i=j\ \text{or one of}\ i,\,j \text{ equals $k^v$ for some
} v \in U^+ \cup U^-.
\end{equation}
Since there are at most $4N|U^+|+N$ pairs $(i,j)$ verifying~\eqref{eq:occ_cond}, the set $\mathscr{A}\setminus \mathscr{B}(U)$ is contained in the union of at most $4N|U^+|+N$ squares, each having area $N^{-2}$. Thus we obtain the bound
\begin{equation}\label{eq:lAB}\lambda (\mathscr{A} \setminus \mathscr{B}(U)) \leq \frac{4}{N}|U^+| + \frac{1}{N}.
\end{equation}
All that is left to do is to estimate $E_{P^{00}_N}[|U^+|]$. Plugging $f(U):=|U^+|$ into~\eqref{eq:scheme_dynamics}, we obtain
\[ E_{P^{00}_N}[|U^+|^2] = {\left(1-\frac{2}{N}\right)}^{-2} E_{P^{00}_N}[ \lambda(\mathscr{B} ) ] \leq {\left(1-\frac{2}{N}\right)}^{-2}, \]
from which we deduce, after an application of Jensen's inequality, that
\begin{equation} E_{P^{00}_N}[|U^+|]\leq {\left(1-\frac{2}{N}\right)}^{-1}.\label{eq:mean_salti} \end{equation}
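Spelling out the Jensen step, which uses only the concavity of the square root:

```latex
E_{P^{00}_N}\big[|U^+|\big]
= E_{P^{00}_N}\Big[\sqrt{|U^+|^2}\Big]
\leq \sqrt{E_{P^{00}_N}\big[|U^+|^2\big]}
\leq {\left(1-\frac{2}{N}\right)}^{-1}.
```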
The conclusion then follows by taking the expectation under $P_N^{00}$ in~\eqref{eq:Lf-LNf} and using \eqref{eq:lAB}-\eqref{eq:mean_salti}.
\end{proof} | {"config": "arxiv", "file": "1710.08856/Bridges.tex"} |
\begin{document}
\title[Recombination and adaptation]{An Eco-Evolutionary approach of Adaptation and Recombination in a large population of varying size}
\author{Charline Smadi}
\address{Universit\'e Paris-Est, CERMICS (ENPC), F-77455 Marne La Vall\'ee, France and CMAP UMR 7641, \'Ecole Polytechnique CNRS, Route de Saclay,
91128 Palaiseau Cedex, France}
\email{[email protected]}
\begin{abstract}
We identify the genetic signature of a selective sweep in a population described by a birth-and-death process with density
dependent competition. We study the limit behaviour for large $K$, where $K$ scales the population size.
We focus on two loci: one under selection and one neutral. We distinguish a soft sweep occurring after an environmental change, from a hard sweep occurring
after a mutation, and express the neutral proportion variation as a function of the
ecological parameters, recombination probability $r_K$, and $K$. We show that for a hard sweep, two recombination regimes appear according to the order
of $r_K\log K$.
\end{abstract}
\maketitle
\section{Introduction}
There are at least two different ways for a population to adapt: selection can either act on a new mutation
(hard selective sweep), or on preexisting
alleles that become advantageous after an environmental change (soft selective sweep from standing variation).
New mutations are sources
of diversity, and hard selective sweeps were until recently the only mode of adaptation considered.
Soft selective sweeps from standing variation allow faster adaptation to novel environments,
and their importance
is growing in empirical and theoretical studies
(Orr and Betancourt \cite{orr2001haldane}, Hermisson and Pennings \cite{hermisson2005soft}, Prezeworski, Coop and Wall
\cite{prezeworski2005signature}, Barrett and Schluter \cite{barrett2008adaptation}, Durand et al.\ \cite{durand2010standing}).
In particular, Messer and Petrov \cite{messer2013population} review extensive evidence, from individual case
studies as well as from genome-wide scans, that soft sweeps (from standing variation and from recurrent mutations)
are common in a broad range of organisms.
These distinct selective sweeps entail different
genetic signatures in the vicinity of the newly
fixed allele, and the increasing amount of genetic data available allows one to detect these signatures in current populations, as described by Peter, Huerta-Sanchez
and Nielsen \cite{peter2012distinguishing}. To do this in an effective way, it is necessary to identify accurately the signatures left
by these two modes of adaptation.
We will not consider in this work the soft selective sweeps from recurrent mutations. For a study of these sweeps we refer to
\cite{pennings2006soft,pennings22006soft,hermisson2008pattern}.
\\
In this work, we consider a sexual haploid population of varying size, modeled by a birth and death process with density
dependent competition.
Each individual's ability to survive and reproduce depends on its own genotype and on the population state.
More precisely, each individual is
characterized by some ecological parameters:
birth rate, intrinsic death rate and competition kernel describing the competition with other individuals depending on their genotype.
The differential
reproductive success of individuals generated by their interactions entails progressive variations in the number of individuals carrying a given genotype.
This process,
called natural selection, is a key mechanism of evolution.
Such an eco-evolutionary approach has been introduced by Metz and coauthors in \cite{metz1996adaptive} and made
rigorous in the seminal paper of Fournier and M\'el\'eard \cite{fournier2004microscopic}.
Then it has been developed by
Champagnat, M\'el\'eard and coauthors (see
\cite{champagnat2006microscopic,
champagnat2011polymorphic,champagnat2014adaptation} and references therein) for
the haploid asexual case and by Collet, M\'el\'eard and Metz \cite{collet2011rigorous}
and Coron and coauthors \cite{coron2013slow,coron2014stochastic} for the diploid sexual case.
The recent work of Billiard and coauthors \cite{billiard2013stochastic} studies the dynamics of a two-locus model in a haploid asexual population.
Following these works, we introduce a parameter $K$ called carrying capacity which scales the population size, and study the limit
behavior for large $K$. But unlike them, we focus on two loci in a sexual haploid population and take into account recombinations: one locus is under selection and has
two possible alleles $A$ and $a$
and the second one is neutral with allele $b_1$ or $b_2$. When two individuals give birth, either a recombination occurs with probability $r_K$ and
the newborn inherits one allele from each parent, or it is a clone of one parent. \\
We first focus on a soft selective sweep from standing variation occurring after a change in the environment (new pathogen, environmental catastrophe, occupation of a new ecological
niche,...). We assume that before the change the alleles $A$ and $a$ were neutral and both represented a positive fraction of the
population, and that in the new environment the allele $a$ becomes favorable and goes to fixation.
We can divide the selective sweep into two periods: a first one where the population process is well approximated by the solution of a deterministic dynamical system, and a second one where $A$-individuals are
near extinction, the deterministic
approximation fails and the fluctuations of the $A$-population size become predominant.
We give the asymptotic value of the final neutral allele proportion as a
function of the ecological parameters, recombination probability $r_K$ and solutions of a two-dimensional competitive Lotka-Volterra
system. \\
We then focus on hard selective sweeps.
We assume that a mutant $a$ appears in a monomorphic $A$-population at ecological equilibrium.
As stated by Champagnat in \cite{champagnat2006microscopic}, the selective sweep is divided into three periods: during the first one, the resident population size stays near its equilibrium value,
and the mutant population size grows until it reaches a non-negligible fraction of the total population size.
The two other periods are the ones
described for the soft selective sweep from standing variation. Moreover, the time needed for the mutant $a$ to fix in the population is of order
$\log K$. We prove that the distribution of neutral alleles at the end of the sweep has
different shapes according to the order of the recombination probability per reproductive event $r_K$ with respect to $1/\log K$. More precisely, we
find two recombination regimes: a strong one where $r_K \log K$ is large, and a weak one where $r_K\log K$ is bounded.
In both recombination regimes, we give the asymptotic value of the final neutral allele proportion as a
function of the ecological parameters and recombination probability $r_K$.
In the strong recombination regime, the frequent exchanges of neutral alleles between the $A$- and $a$-populations yield a homogeneous neutral
distribution in the two populations, which is therefore not modified by the sweep.
In the weak recombination regime, the frequency of the neutral allele carried by the first mutant increases because it is linked to the
positively selected allele.
This phenomenon has been called genetic hitch-hiking
by Maynard Smith and Haigh \cite{smith1974hitch}. \\
The first studies of hitch-hiking, initiated by Maynard Smith and Haigh \cite{smith1974hitch}, have modeled the mutant population size as
the solution of a deterministic logistic equation \cite{ohta1975effect,kaplan1989hitchhiking,stephan1992effect,
stephan2006hitchhiking}.
Kaplan and coauthors \cite{kaplan1989hitchhiking} described the neutral genealogies by a structured coalescent
where the background was the frequency of the
beneficial allele.
Barton \cite{barton1998effect}
was the first to point out the importance of the stochasticity of the mutant population size and the errors made by ignoring it.
He divided the sweep in four periods: the two last ones are the analogues of the two last steps described in \cite{champagnat2006microscopic}, and
the two first ones
correspond to the first one in \cite{champagnat2006microscopic}.
Following the approaches of \cite{kaplan1989hitchhiking} and \cite{barton1998effect},
a series of works studied the genealogies of neutral alleles sampled at the end of the sweep and took into account the randomness
of the mutant population size
during the sweep. In particular,
Durrett and Schweinsberg \cite{durrett2004approximating,schweinsberg2005random},
Etheridge and coauthors \cite{etheridge2006approximate}, Pfaffelhuber and Studeny \cite{pfaffelhuber2007approximating}, and Leocard \cite{stephanie2009selective}
described the population process by a structured coalescent and finely studied genealogies of neutral alleles during the sweep.
Eriksson
and coauthors \cite{eriksson2008accurate} described a deterministic approximation for the growth of the beneficial allele
frequency during a sweep, which leads to more accurate approximations than previous models for
large values of the recombination probability.
Unlike in our model, in all these works the population size was constant and the individuals' ``selective value'' did not depend on the population state,
but only on the individuals' genotype.\\
The structure of the paper is the following. In Section \ref{model} we describe the model, review some results of
\cite{champagnat2006microscopic} about the two-dimensional population process when we do not consider the neutral locus, and present the main results.
In Section \ref{prel_results} we state a semi-martingale decomposition of neutral proportions,
a key tool in the different proofs. Section \ref{proofstand} is devoted to the proof for the soft sweep from standing variation. It relies on a
comparison of the population process with a four dimensional dynamical system.
In Section \ref{section_couplage} we describe a coupling of the population process with two birth and death processes widely
used in Sections
\ref{proofstrong} and \ref{proofweak}, respectively devoted to the proofs for the strong and the weak recombination regimes
of hard sweep.
The proof for the weak regime requires a fine study of the genealogies in a structured
coalescent process during the first phase of the selective sweep. We use here some ideas developed in \cite{schweinsberg2005random}.
Finally in the Appendix we state technical results.\\
This work stems from the papers of Champagnat \cite{champagnat2006microscopic} and Schweinsberg and Durrett \cite{schweinsberg2005random}.
In the sequel, $c$ is used to denote a positive finite constant. Its value can change from
line to line but it is always independent of the integer $K$ and the positive real number $\eps$.
The set $\N:=\{1,2,...\}$ denotes the set of positive integers.
\section{Model and main results}\label{model}
We introduce the sets $\mathcal{A}=\{A,a\}$, $\mathcal{B}=\{b_1,b_2\}$, and $\mathcal{E}=\{A,a\}\times \{b_1,b_2\}$ to describe
the genetic background of individuals.
The state of the population will be given by the four dimensional Markov process
$N^{(z,K)}=(N_{\alpha \beta}^{(z,K)}(t), (\alpha, \beta) \in \mathcal{E},t\geq 0)$ where $N_{\alpha \beta}^{(z,K)}(t)$ denotes the number of
individuals with alleles $(\alpha,\beta)$ at time $t$ when the carrying capacity is $K \in \N$ and the initial state is
$\lfloor zK \rfloor $ with $z=(z_{\alpha \beta}, (\alpha, \beta) \in \mathcal{E}) \in \R_+^\mathcal{E}$.
We recall that $b_1$ and $b_2$ are neutral, thus ecological parameters only
depend on the allele, $A$ or $a$, carried by the individuals at their first locus. They are the following:
\begin{enumerate}
\item[$\bullet$] For $\alpha \in \mathcal{A}$, $f_\alpha$ and $D_\alpha$ denote the birth rate and the intrinsic death rate of an individual carrying allele
$\alpha$.
\item[$\bullet$] For $(\alpha, \alpha') \in \mathcal{A}^2$, $C_{\alpha,\alpha'}$ represents the competitive pressure felt by an individual carrying
allele $\alpha$ from an individual carrying allele $\alpha'$.
\item[$\bullet$] $K \in \N$ is a parameter rescaling the competition between individuals. It can be interpreted as a scale of resources or area available,
and is related to the concept of carrying capacity, which is the maximum population size that the
environment can sustain indefinitely. In the sequel $K$ will be large.
\item[$\bullet$] $r_K$ is the recombination probability per reproductive event. When two individuals with respective genotypes
$(\alpha,\beta)$ and $(\alpha',\beta')$ in $\mathcal{E}$ give birth, the newborn
individual either is a clone of one parent, carrying alleles $(\alpha,\beta)$ or $(\alpha',\beta')$ each
with probability $(1-r_K)/2$,
or has a mixed genotype
$(\alpha,\beta')$ or $(\alpha',\beta)$ each with probability $r_K/2$.
\end{enumerate}
We will use, for every $n=(n_{\alpha\beta}, (\alpha, \beta) \in \mathcal{E}) \in \Z_+^\mathcal{E}$, and $(\alpha,\beta) \in \mathcal{E}$, the notations
\begin{equation*} \label{notR4} n_{\alpha }=n_{\alpha b_1}+n_{\alpha b_2} \quad \text{and} \quad |n|=n_A+n_a.\end{equation*}
Let us now give the transition rates of $N^{(z,K)}$ when $N^{(z,K)}(t)=n\in \Z_+^\mathcal{E}$.
An individual can die either from a natural death or from competition, whose strength depends on the carrying capacity $K$.
Thus, the cumulative death rate of individuals
$\alpha \beta$, with $(\alpha, \beta) \in \mathcal{E}$ is given by:
\begin{equation}\label{death_rate} {d}_{\alpha\beta}^K(n)=\left[ D_\alpha+C_{\alpha,A}n_{A}/K+C_{\alpha,a}n_{a}/K\right]{n_{\alpha\beta}}. \end{equation}
An individual carrying allele $\alpha \in \mathcal{A}$ produces gametes with rate $f_\alpha$, thus the relative frequencies of gametes available
for reproduction are
$$ p_{\alpha \beta}(n)=f_\alpha n_{\alpha \beta}/(f_A n_{A}+f_a n_{a}), \quad (\alpha, \beta) \in \mathcal{E}.$$
When an individual gives birth, it chooses its mate uniformly among the gametes available. Then the probability of giving birth to an individual
of a given genotype depends on the parents (the couple $((a,b_2),(a,b_1))$ is not able to generate an individual
$(A,b_1)$). We detail the computation of the cumulative birth rate of individuals $(A,b_1)$:
\begin{eqnarray*}
b_{Ab_1}^K(n)&=&f_A n_{Ab_1}[ p_{Ab_1}+{p_{Ab_2}}/{2}+{p_{ab_1}}/{2}+(1-r_K){p_{ab_2}}/{2}]+f_A n_{Ab_2}[ {p_{Ab_1}}/{2}+r_K{p_{ab_1}}/{2}]\\
&&+f_a n_{ab_1}[ {p_{Ab_1}}/{2}+r_K{p_{Ab_2}}/{2} ]+f_a n_{ab_2} (1-r_K){p_{Ab_1}}/{2}\\
&=&f_A n_{Ab_1} + r_K f_A f_a (n_{ab_1}n_{Ab_2}-{n_{Ab_1}}n_{ab_2})/({f_A n_{A}+f_a n_{a}}).
\end{eqnarray*}
If we denote by $\bar{\alpha}$ (resp. $\bar{\beta}$) the complement of $\alpha$ in $\mathcal{A}$ (resp. $\beta$ in $\mathcal{B}$),
we obtain in the same way the cumulative birth rate of individuals $(\alpha ,\beta)$:
\begin{equation}\label{birth_rate} {b}^K_{\alpha\beta}(n)=f_\alpha n_{\alpha\beta} + r_K f_a f_{A}\frac{n_{\bar{\alpha}\beta}n_{\alpha\bar{\beta}}
-n_{\alpha\beta}n_{\bar{\alpha}\bar{\beta}}}{f_A n_{A}+f_a n_{a}}, \quad (\alpha, \beta) \in \mathcal{E}.\end{equation}
The definitions of death and birth rates in \eqref{death_rate} and \eqref{birth_rate} ensure that the number of jumps is finite on every finite interval, and the population process is well defined.\\
When we focus on the dynamics of the alleles under selection, $A$ and $a$, we get the process $(N^{(z,K)}_A,N^{(z,K)}_a)$, which is
also a birth and death process with competition.
It has been studied in \cite{champagnat2006microscopic} and its cumulative death and birth rates, which are direct consequences of \eqref{death_rate} and \eqref{birth_rate}, satisfy for $\alpha \in \mathcal{A}$:
\begin{equation}\label{defdaba}
{d}_{\alpha}^K(n)=\underset{\beta \in \mathcal{B}}{\sum} {d}_{\alpha \beta}^K(n)=\Big[ D_\alpha+C_{\alpha,A}\frac{n_{A}}{K}+C_{\alpha,a}\frac{n_{a}}{K}\Big]{n_{\alpha}}, \quad {b}^K_{\alpha}(n)=\underset{\beta \in \mathcal{B}}{\sum} {b}_{\alpha \beta}^K(n)=f_\alpha n_{\alpha}.
\end{equation}
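Indeed, when summing the birth rates \eqref{birth_rate} over $\beta \in \mathcal{B}$, the recombination terms cancel pairwise:

```latex
\sum_{\beta \in \mathcal{B}}
\big( n_{\bar{\alpha}\beta}n_{\alpha\bar{\beta}} - n_{\alpha\beta}n_{\bar{\alpha}\bar{\beta}} \big)
= \big( n_{\bar{\alpha}b_1}n_{\alpha b_2} - n_{\alpha b_1}n_{\bar{\alpha}b_2} \big)
+ \big( n_{\bar{\alpha}b_2}n_{\alpha b_1} - n_{\alpha b_2}n_{\bar{\alpha}b_1} \big)
= 0,
```

so that ${b}^K_{\alpha}(n)=f_\alpha n_\alpha$: recombination reshuffles neutral alleles but does not affect the dynamics of the selected locus.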
It is proven in \cite{champagnat2006microscopic} that when $N^{(z,K)}_A$ and $N^{(z,K)}_a$ are of order $K$,
the rescaled population process $(N^{(z,K)}_A/K,N^{(z,K)}_a/K)$ is well
approximated by the dynamical system:
\begin{equation} \label{S1}
\dot{n}_\alpha^{(z)}=(f_\alpha-D_\alpha-C_{\alpha,A}n_A^{(z)}-C_{\alpha,a}n_a^{(z)})n_\alpha^{(z)},\quad n_\alpha^{(z)}(0)=z_\alpha,\quad \alpha \in \mathcal{A}.
\end{equation}
More precisely Theorem 3 (b) in \cite{champagnat2006microscopic} states that for every compact
subset
$$B \subset (\R_+^{A\times \mathcal{B}}\smallsetminus (0,0)) \times (\R_+^{a\times \mathcal{B}}\smallsetminus (0,0))$$
and finite real number $T$, we have
for any $\delta>0$,
\begin{equation}\label{result_champa1} \underset{K \to \infty}{\lim}\ \underset{z \in B}{\sup}\ \P\Big(\underset{0\leq t \leq T,\alpha \in \mathcal{A}}
{\sup} |N^{(z,K)}_\alpha(t)/K-{n}^{(z)}_\alpha(t)|\geq \delta \Big)=0.\end{equation}
Moreover, if we assume
\begin{equation}\label{assumption}
f_A>D_A, \quad f_a>D_a, \quad \text{and} \quad f_a-D_a>(f_A-D_A)\cdot \sup \Big\{{C_{a,A}}/{C_{A,A}}, {C_{a,a}}/{C_{A,a}}\Big\},
\end{equation}
then the dynamical system \eqref{S1} has a unique attracting equilibrium $(0,\bar{n}_a)$ for initial condition $z$ satisfying $z_a>0$,
and two unstable steady states $(0,0)$ and $(\bar{n}_A,0)$, where
\begin{equation}\label{defnbara1}
\bar{n}_\alpha=\frac{f_\alpha-D_\alpha}{C_{\alpha,\alpha}}>0,\quad \alpha \in \mathcal{A}.
\end{equation}
Hence, Assumption (\ref{assumption}) precludes the coexistence of alleles $A$ and $a$, and $\bar{n}_\alpha$ is the equilibrium density
of a monomorphic $\alpha$-population per unit of carrying capacity. This implies that when $K$ is large, the size of
a monomorphic $\alpha$-population stays near $\bar{n}_\alpha K$ for a long time (Theorem 3 (c) in \cite{champagnat2006microscopic}).
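As a sanity check, the convergence of the dynamical system \eqref{S1} toward $(0,\bar{n}_a)$ under Assumption \ref{assumption_eq} can be observed numerically. The sketch below uses a forward Euler scheme; all parameter values are illustrative assumptions chosen to satisfy the stability condition, not values taken from the paper.

```python
# Illustrative ecological parameters (assumptions): they satisfy
# f_a - D_a > (f_A - D_A) * max(C_aA / C_AA, C_aa / C_Aa).
f_A, D_A, f_a, D_a = 1.0, 0.0, 2.0, 0.0
C_AA, C_Aa, C_aA, C_aa = 1.0, 1.0, 1.0, 1.0
assert f_a - D_a > (f_A - D_A) * max(C_aA / C_AA, C_aa / C_Aa)

nbar_a = (f_a - D_a) / C_aa  # attracting equilibrium density of the a-population

def solve_S1(nA, na, T=50.0, dt=1e-3):
    """Forward-Euler integration of the competitive Lotka-Volterra system (S1)."""
    for _ in range(int(T / dt)):
        dnA = (f_A - D_A - C_AA * nA - C_Aa * na) * nA
        dna = (f_a - D_a - C_aA * nA - C_aa * na) * na
        nA, na = nA + dt * dnA, na + dt * dna
    return nA, na

# Any initial condition with z_a > 0: the A-population dies out
nA_inf, na_inf = solve_S1(0.5, 0.1)
```

With these parameters $\bar{n}_a = 2$, and the trajectory indeed approaches $(0,2)$.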
Moreover, if we introduce the invasion fitness $S_{\alpha \bar{\alpha}}$ of a mutant $\alpha$ in a population $\bar{\alpha}$,
\begin{equation}\label{deffitinv1}
S_{\alpha \bar{\alpha}}=f_\alpha-D_\alpha-C_{\alpha,\bar{\alpha}}\bar{n}_{\bar{\alpha}} ,\quad \alpha \in \mathcal{A},
\end{equation}
it corresponds to the per capita growth
rate of a mutant $\alpha$ when it appears in a population $\bar{\alpha}$ at its equilibrium density $\bar{n}_{\bar{\alpha}}$. Assumption
\eqref{assumption} is equivalent to
\begin{ass}\label{assumption_eq} Ecological parameters satisfy
$$ \bar{n}_A>0, \quad \bar{n}_a>0,\quad \text{and} \quad S_{Aa}<0<S_{aA}.$$
\end{ass}
\noindent Under Assumption \ref{assumption_eq}, with positive probability, the $A$-population becomes extinct and the $a$-population size reaches a vicinity of its
equilibrium value $\bar{n}_a K$.\\
The case we are interested in is referred to in population genetics as soft selection \cite{wallace1975hard}:
it is both frequency and density dependent. This kind of selection has no influence on the order of the total population size, which remains of the same order as the carrying capacity $K$.
However, the factor multiplying the carrying capacity can be modified, as the way the individuals use the resources
depends on the ecological parameters.
We focus on strong selection coefficients, which are characterized by $S_{aA}\gg 1/K$. In this case the selection outcompetes
the genetic drift. However, unlike
\cite{smith1974hitch,barton1998effect,stephan2006hitchhiking}, we do not need to assume $S_{aA}\ll 1$ to obtain approximations.
To study the genealogy of the selected allele when the selection coefficient is weak ($S_{aA}K$ moderate or small)
we refer to the approach of Neuhauser and Krone \cite{neuhauser1997genealogy}.\\
Let us now present the main results of this paper. We introduce the extinction time of the $A$-population, and the fixation event
of the $a$-population. For $(z,K) \in \R_+^{\mathcal{E}}\times \N$:
\begin{equation}\label{defText}T^{(z,K)}_{\text{ext}}:=\inf \Big\{ t \geq 0, N_A^{(z,K)}(t)=0 \Big\},\quad \text{and} \quad
\text{Fix}^{(z,K)}:=\Big\{T^{(z,K)}_{\text{ext}}<\infty, N_a^{(z,K)}(T^{(z,K)}_{\text{ext}})>0\Big\} .\end{equation}
We are interested in the neutral allele proportions. We thus define for $t \geq 0$,
\begin{equation}\label{def_proportion} P_{\alpha, \beta}^{(z,K)}(t) = \frac{N^{(z,K)}_{\alpha \beta}(t)}{N^{(z,K)}_{\alpha}(t)}, \quad (\alpha,\beta) \in \mathcal{E}, K\in \N, z \in \R_+^{\mathcal{E}},\end{equation}
the proportion of alleles $\beta$ in the $\alpha$-population at time $t$.
More precisely, we are interested in these proportions at the end of the sweep, that is at time $T^{(z,K)}_{\text{ext}}$ when the last $A$-individual
dies. We then introduce the neutral proportion at this time:
\begin{equation}\label{def_proportiontext} \mathcal{P}_{a, b_1}^{(z,K)}=P_{a, b_1}^{(z,K)}(T^{(z,K)}_{\text{ext}}). \end{equation}
We first focus on soft selective sweeps from standing variation. We assume that the alleles $A$ and $a$ were neutral and coexisted in a population with large carrying capacity $K$.
At time $0$, an environmental change makes the allele $a$ favorable (in the sense of Assumption \ref{assumption_eq}). Before stating the result,
let us introduce the function $F$, defined for every $(z,r,t) \in (\R_+^{\mathcal{E}})^* \times [0,1] \times \R_+$ by
\begin{equation}\label{defF}
F(z,r,t)=\int_0^t \frac{rf_Af_an_A^{(z)}(s)}{f_An_A^{(z)}(s)+f_an_a^{(z)}(s)}\exp\Big( -rf_Af_a\int_0^s \frac{n_A^{(z)}(u)+n_a^{(z)}(u)}{f_An_A^{(z)}(u)+f_an_a^{(z)}(u)}du \Big)ds,
\end{equation}
where $(n_A^{(z)},n_a^{(z)})$ is the solution of the dynamical system \eqref{S1}.
We notice that $t \in \R_+ \mapsto F(z,r,t)$ is non-negative and non-decreasing. Moreover, if we introduce the function
$$h: (z,r,t) \in (\R_+^{\mathcal{E}})^* \times [0,1] \times \R_+ \mapsto rf_Af_a\int_0^t {n_A^{(z)}(s)}/({f_An_A^{(z)}(s)+f_an_a^{(z)}(s)})ds,$$
which is non-decreasing
in time, then
$$ 0\leq F(z,r,t)\leq \int_0^t \partial_s h(z,r,s)e^{-h(z,r,s)}ds=e^{-h(z,r,0)}-e^{-h(z,r,t)}\leq 1. $$
Thus $F(z,r,t)$ has a limit in $[0,1]$ when $t$ goes to infinity and we can define
\begin{equation}\label{limF} F(z,r):=\lim_{t \to \infty}F(z,r,t) \in [0,1] .\end{equation}
Noticing that for every $r \in [0,1]$ and $t \geq 0$,
$$ 0\leq F(z,r)- F(z,r,t)\leq \int_t^\infty \frac{f_Af_an_A^{(z)}(s)}{f_An_A^{(z)}(s)+f_an_a^{(z)}(s)}ds \underset{t\to \infty}{\to}0,$$
we get that the convergence of $(F(z,r,t), t \geq 0)$ is uniform for $r \in [0,1]$.
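For intuition, the limit weight $F(z,r)$ of \eqref{limF} can be approximated by discretizing the integral \eqref{defF} along an Euler solution of \eqref{S1}. The parameters below are illustrative assumptions; the only properties checked are the ones established above, namely $F(z,0)=0$ and $F(z,r)\in[0,1]$.

```python
import math

# Illustrative parameters (assumptions), satisfying the stability condition.
f_A, D_A, f_a, D_a = 1.0, 0.0, 2.0, 0.0
C_AA, C_Aa, C_aA, C_aa = 1.0, 1.0, 1.0, 1.0

def F_approx(zA, za, r, T=50.0, dt=1e-3):
    """Riemann-sum approximation of F(z, r, T); for large T this is close to
    the limit F(z, r), since n_A vanishes exponentially fast."""
    nA, na = zA, za
    h = 0.0   # inner time integral appearing in the exponential of F
    F = 0.0   # outer integral
    for _ in range(int(T / dt)):
        denom = f_A * nA + f_a * na
        F += dt * r * f_A * f_a * nA / denom * math.exp(-h)
        h += dt * r * f_A * f_a * (nA + na) / denom
        dnA = (f_A - D_A - C_AA * nA - C_Aa * na) * nA
        dna = (f_a - D_a - C_aA * nA - C_aa * na) * na
        nA, na = nA + dt * dnA, na + dt * dna
    return F

F_zero = F_approx(0.5, 0.5, 0.0)   # no recombination: weight 0
F_weak = F_approx(0.5, 0.5, 0.1)   # positive recombination: weight in (0, 1)
```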
In the case of a soft sweep from standing variation, the selected allele gets to fixation with high probability. More precisely,
it is proven in \cite{champagnat2006microscopic} that under Assumption \ref{assumption_eq},
\begin{equation}\label{convfixcas1} \underset{K \to \infty}{\lim}\P(\text{Fix}^{(z,K)})= 1,
\quad \forall z \in \R_+^{A\times \mathcal{B}} \times (\R_+^{a\times \mathcal{B}}\setminus (0,0)).\end{equation}
\noindent Then recalling \eqref{def_proportiontext} we get the following result whose proof is deferred to Section \ref{proofstand}:
\begin{theo} \label{main_result2}
Let $z$ be in $\R_+^{A\times \mathcal{B}} \times (\R_+^{a\times \mathcal{B}}\setminus (0,0))$ and Assumption \ref{assumption_eq} hold.
Then on the fixation event $\textnormal{Fix}^{(z,K)}$, the proportion of alleles $b_1$ when the $A$-population becomes extinct
(time $T_{\text{ext}}^{(z,K)}$) converges in probability:
$$ \underset{K \to \infty}{\lim}\P \Big(\mathbf{1}_{\textnormal{Fix}^{(z,K)}}\Big| \mathcal{P}_{a,b_1}^{(z,K)}-\Big[\frac{z_{Ab_1}}{z_A}F(z,r_K) +\frac{z_{ab_1}}{z_a}(1-F(z,r_K))\Big]\Big| >\eps\Big) = 0
, \quad \forall \eps>0. $$
\end{theo}
The neutral proportion at the end of a soft sweep from standing variation is thus a weighted mean of initial proportions in populations $A$ and $a$.
In particular, a soft
sweep from standing variation is
responsible for a reduction in the number of neutral alleles with very low or very high proportions in the population, as remarked in
\cite{prezeworski2005signature}. We notice that the
weight $F(z,r_K)$ does not depend on the initial neutral proportions. It
only depends on $r_K$ and on the dynamical system \eqref{S1} with initial condition $(z_A,z_a)$.
The proof consists in comparing the population process with the four dimensional dynamical system,
\begin{equation} \label{syst_dyn}
\dot{n}_{\alpha \beta}^{(z,K)} = \left( f_\alpha -D_\alpha-C_{\alpha,A}{n}^{(z,K)}_{A}-C_{\alpha,a}{n}^{(z,K)}_{a}\right){n}^{(z,K)}_{\alpha \beta}+
rf_A f_a\frac{ {n}^{(z,K)}_{\bar{\alpha} \beta}{n}^{(z,K)}_{\alpha \bar{\beta}}-{n}^{(z,K)}_{\alpha \beta}
{n}^{(z,K)}_{\bar{\alpha}\bar{\beta}}}{f_A {n}^{(z,K)}_{A}+f_a {n}^{(z,K)}_{a}}, \quad (\alpha,\beta)\in \mathcal{E},
\end{equation}
with initial condition $n^{(z,K)}(0)=z \in \R_+^\mathcal{E}$. Then by a change of variables, we can study the dynamical system \eqref{syst_dyn}
and prove that
\begin{equation*}\frac{n^{(z,K)}_{a,b_1}(\infty)}{n^{(z,K)}_{a}(\infty)}= \frac{z_{Ab_1}}{z_A} F(z,r_K)
+\frac{z_{ab_1}}{z_a}(1-F(z,r_K)),\end{equation*} which leads to the result.\\
Now we focus on hard selective sweeps: a mutant $a$ appears in a large population and gets to fixation. We assume that the mutant appears when the
$A$-population is at ecological equilibrium, and carries the neutral allele $b_1$. In other words, recalling Definition \eqref{defnbara1}, we assume:
\begin{ass}\label{defrK}
There exists $z_{Ab_1} \in ]0,\bar{n}_A[$ such that $N^{(z^{(K)},K)}(0)=\lfloor z^{(K)}K\rfloor$ with
$$ z^{(K)}=(z_{Ab_1} ,\bar{n}_A-z_{Ab_1},K^{-1}, 0). $$
\end{ass}
\noindent In this case, the selected allele gets to fixation with positive probability.
More precisely, it is proven {in \cite{champagnat2006microscopic} that} under Assumptions \ref{assumption_eq} and \ref{defrK},
\begin{equation} \label{proba_fix}\underset{K \to \infty}{\lim}\P\Big(\text{Fix}^{(z^{(K)},K)}\Big)= \frac{S_{aA}}{f_a}.\end{equation}
In the case of a hard selective sweep we will distinguish two different recombination regimes:
\begin{ass}\label{condstrong} Strong recombination
$$\lim_{K \to \infty}\ r_K\log K=\infty .$$
\end{ass}
\begin{ass} \label{condweak} Weak recombination
$$\limsup_{K \to \infty}\ r_K\log K<\infty .$$
\end{ass}
\noindent Recall \eqref{def_proportiontext} and introduce the real number
\begin{equation}
\label{defrhoK} \rho_K:= 1-\exp \Big( -\frac{f_ar_K\log K}{S_{aA}} \Big).
\end{equation}
Then we have the following results whose proofs are deferred
to Sections \ref{proofstrong} and \ref{proofweak}:
\begin{theo} \label{main_result}
Suppose that Assumptions \ref{assumption_eq} and \ref{defrK} hold. Then on the fixation event $\textnormal{Fix}^{(z^{(K)},K)}$ and under Assumption \ref{condstrong}
or \ref{condweak}, the proportion of alleles $b_1$ when the $A$-population becomes extinct (time $T_{\text{ext}}^{(z^{(K)},K)}$) converges in probability. More precisely,
if Assumption \ref{condstrong} holds,
$$ \lim_{K \to \infty}\P \Big(\mathbf{1}_{\textnormal{Fix}^{(z^{(K)},K)}}\Big|\mathcal{P}_{a,b_1}^{(z^{(K)},K)}
-\frac{z_{Ab_1}}{z_A}\Big|>\eps\Big) = 0
, \quad \forall \eps>0, $$
and if Assumption \ref{condweak} holds,
$$ \lim_{K \to \infty}\P \Big(\mathbf{1}_{\textnormal{Fix}^{(z^{(K)},K)}}\Big|\mathcal{P}_{a,b_1}^{(z^{(K)},K)}
-\Big[ (1-\rho_K)+ \rho_K\frac{z_{Ab_1}}{z_A}\Big]\Big|>\eps\Big)= 0
, \quad \forall \eps>0. $$
\end{theo}
As stated in \cite{champagnat2006microscopic}, the selective sweep has a duration of order $\log K$. Thus, when $r_K \log K$ is large, a lot
of recombinations occur during the sweep, and the neutral alleles are constantly exchanged between the
populations $A$ and $a$. Hence in the strong recombination case, the sweep does not modify the neutral
allele proportion. On the contrary, when $r_K$
is of order $1/\log K$ the number of recombinations undergone by a given lineage does not go to infinity, and the frequency of the neutral
allele $b_1$ carried by the first mutant $a$ increases.
More precisely, we will show that the probability for a neutral lineage to undergo a recombination and be descended from an individual
of type $A$ alive at the beginning of the sweep is close to $\rho_K$.
Then to know
the probability for such an allele to be
a $b_1$ or a $b_2$, we have to approximate the proportion of alleles $b_1$ in the $A$-population
when the recombination occurs.
We will prove that this proportion stays close to the initial one $z_{Ab_1}/z_A$ during the first phase.
With probability $1-\rho_K$, a neutral allele originates from the first mutant.
In this case it is necessarily a $b_1$. This gives the result for the weak recombination regime.
In fact the probability for a neutral lineage to undergo no recombination during the first phase is quite intuitive: broadly speaking, the probability to have
no recombination at a birth
event is $1-r_K$, the birth rate is $f_a$ and the duration of the first phase is $\log K/S_{aA}$.
Hence as $r_K$ is small for large $K$, $1-r_K \sim \exp(-r_K)$ and the probability to undergo no recombination is approximately
$$ (1-r_K)^{f_a\log K/S_{aA}}\sim \exp(-r_K{f_a\log K/S_{aA}})=1-\rho_K. $$
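This heuristic is easy to check numerically. The values of $f_a$, $S_{aA}$, $K$ and $r_K$ below are illustrative assumptions, with $r_K$ of order $1/\log K$ as in the weak recombination regime.

```python
import math

f_a, S_aA = 2.0, 1.0      # illustrative ecological constants (assumptions)
K = 10 ** 6
r_K = 1.0 / math.log(K)   # weak recombination: r_K * log K stays bounded

# rho_K as defined in the text, and the exact no-recombination probability
# of the heuristic computation
rho_K = 1.0 - math.exp(-f_a * r_K * math.log(K) / S_aA)
no_recomb = (1.0 - r_K) ** (f_a * math.log(K) / S_aA)
```

Here $\rho_K = 1 - e^{-2} \approx 0.865$ and the exact product $(1-r_K)^{f_a\log K/S_{aA}}$ differs from $1-\rho_K$ by less than $10^{-2}$.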
\begin{rem}
The limits in the two regimes are consistent in the sense that
$$\underset{r_K \log K \to \infty}{\lim} \rho_K=1. $$
Moreover, let us notice that we can easily extend the results of Theorems \ref{main_result2} and \ref{main_result} to a finite number of possible alleles $b_1$, $b_2$, ..., $b_i$ on the neutral locus.
\end{rem}
\begin{rem}
As it will appear in the proofs (see Sections \ref{proofstrong} and \ref{proofweak}), the final neutral proportion
in the $a$ population is already
reached at the end of the first phase. In particular, the results are still valid if the sweep is not complete but the allele
$a$ only reaches a fraction $0<p<1$ of the population at the end of the sweep. The fact that the final neutral proportion is
mostly determined by the beginning of the sweep has already been noticed by Coop and Ralph in \cite{coop2012patterns}.
\end{rem}
\section{A semi-martingale decomposition}\label{prel_results}
The expression of birth rate in \eqref{birth_rate} shows that the effect of recombination depends on the recombination probability
$r_K$ but also on the population state via the term $n_{\bar{\alpha}\beta}n_{\alpha\bar{\beta}}-n_{\alpha\beta}n_{\bar{\alpha}\bar{\beta}}$.
Proposition \ref{mart_prop} states a semi-martingale representation
of the neutral allele proportions and makes this interplay more precise.
\begin{pro}\label{mart_prop}
Let $(\alpha,z, K)$ be in $\mathcal{A}\times (\R_+^{\mathcal{E}})^*\times \N$. The process $(P_{\alpha,b_1}^{(z,K)}(t),t\geq 0)$ defined in
\eqref{def_proportion} is a semi-martingale and we have the following decomposition:
\begin{multline} \label{defM} P_{\alpha,b_1}^{(z,K)}(t)=P_{\alpha,b_1}^{(z,K)}(0)+ M^{(z,K)}_\alpha(t)\\+
r_Kf_A f_a \int_0^{t } {\mathbf{1}_{\{N_\alpha(s)\geq 1\}}} \frac{N_{\bar{\alpha}b_1}^{(z,K)}(s)N^{(z,K)}_{\alpha b_2}(s)-N^{(z,K)}_{\alpha b_1}(s)N^{(z,K)}_{\bar{\alpha}b_2}(s)}
{(N^{(z,K)}_{\alpha}(s)+1)(f_A N^{(z,K)}_{A}(s)+f_a N^{(z,K)}_{a}(s))}ds , \end{multline}
where the process $(M_\alpha^{(z,K)}(t),t\geq 0)$ is a martingale bounded on every interval $[0,t]$ whose quadratic variation is given by \eqref{crochet}.
\end{pro}
To lighten the presentation in remarks and proofs we shall mostly write $N$ instead of $N^{(z,K)}$.
\begin{rem} \label{remLD}
The process $N_{ab_2}N_{Ab_1}-N_{ab_1}N_{Ab_2}$ will play a major role in the dynamics of neutral proportions. Indeed it is a
measure of the neutral proportion disequilibrium between the $A$ and $a$-populations as it satisfies:
\begin{equation}\label{equaldiffprop} N_{A}N_{a}(P_{A,b_1}-P_{a,b_1})={N_{ab_2}N_{Ab_1}-N_{ab_1}N_{Ab_2}}. \end{equation}
This quantity is linked with the linkage disequilibrium of the population, which is the occurrence of some allele combinations more or less often
than would be expected from a random formation of haplotypes (see \cite{durrett2008probability} Section 3.3 for an introduction to this
notion or \cite{mcvean2007structure} for a study of its structure around a sweep).
\end{rem}
\begin{rem}
By taking expectations in Proposition \ref{mart_prop} we can compare our results with those of Ohta and Kimura \cite{ohta1975effect}.
In their work the population size is infinite and the proportion of the favorable allele $(y_t, t \geq 0)$ evolves as a deterministic logistic curve:
$$ \frac{dy_t}{dt}=sy_t(1-y_t) .$$
Moreover, $x_1$ and $x_2$ denote the neutral proportions of a given allele in the selected and non-selected
populations respectively, and are modeled
by a diffusion.
By making the analogies
$$ N_e(t)=N_A(t)+N_a(t), \quad y_t=\frac{N_a(t)}{N_A(t)+N_a(t)}, \quad x_1(t)=\frac{N_{ab_1}(t)}{N_a(t)}, \quad x_2(t)=\frac{N_{Ab_1}(t)}{N_A(t)} ,$$
where $N_e$ is the effective population size, the results of \cite{ohta1975effect} can be written
$$ \frac{d\E[P_{\alpha,b_1}(t)]}{dt}=r \frac{\E[N_{\bar{\alpha}b_1}(t)N_{\alpha b_2}(t)-N_{\alpha b_1}(t)N_{\bar{\alpha}b_2}(t)]}
{N_{\alpha}(t)(N_{A}(t)+ N_{a}(t))}, $$
and
\begin{eqnarray*} \frac{d\E[P^2_{\alpha,b_1}(t)]}{dt}&=& \frac{\E[P_{\alpha,b_1}(t)(1-P_{\alpha,b_1}(t))]}{2N_\alpha(t)}+
2r \frac{\E[N_{\alpha b_1}(t)(N_{\bar{\alpha}b_1}(t)N_{\alpha b_2}(t)-N_{\alpha b_1}(t)N_{\bar{\alpha}b_2}(t))]}
{N_{\alpha}^2(t)(N_{A}(t)+ N_{a}(t))}. \end{eqnarray*}
Hence the dynamics of the first moments are very similar to those we obtain when we take equal birth rates $f_A=f_a$ and a recombination probability $r_K=r/f_a$.
In contrast, the second moments of neutral proportions are very different in the two models.
\end{rem}
\begin{proof}[Proof of Proposition \ref{mart_prop}]
In the vein of Fournier and M\'el\'eard \cite{fournier2004microscopic} we represent the population process in terms
of a Poisson point measure. Let $Q(ds,d\theta)$
be a Poisson random measure on $\R_+^2$ with intensity $ds d\theta$, and $(e_{\alpha \beta}, (\alpha, \beta)\in \mathcal{E} )$ the canonical
basis of $\R^\mathcal{E}$.
According to \eqref{defdaba} a jump occurs at rate
$$\sum_{(\alpha,\beta)\in \mathcal{E}}(b_{\alpha \beta}^K(N)+d_{\alpha \beta}^K(N))=f_aN_a+d_a^K(N)+f_AN_A+d_A^K(N).$$
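Indeed, when summing the birth rates \eqref{birth_rate} over the two neutral types, the recombination terms cancel:
$$ b^K_{\alpha b_1}(N)+b^K_{\alpha b_2}(N)=f_\alpha N_\alpha, \quad \alpha \in \mathcal{A}. $$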
We decompose according to the possible jumps: births and deaths of $a$-individuals, and births and deaths of $A$-individuals.
Itô's formula with jumps (see \cite{ikeda1989stochastic} p. 66) then yields, for
every measurable and bounded function $h$ on $\R_+^{\mathcal{E}}$:
\begin{eqnarray}\label{defN}
h(N(t))&=&h(N(0))+ \int_0^t\int_{\R_+} \Big\{ \underset{\alpha \in \mathcal{A}}{ \sum} \Big(
h(N({s^-})+e_{\alpha b_1})\mathbf{1}_{0<\theta-\mathbf{1}_{\alpha=A}(f_{a}N_a(s^-)+d_{a}^K(N({s^-})))\leq b^K_{\alpha b_1}(N({s^-}))}\nonumber \\
&&\hspace{1cm}+h(N({s^-})+e_{\alpha b_2})\mathbf{1}_{b^K_{\alpha b_1}(N({s^-}))<\theta-\mathbf{1}_{\alpha=A}(f_{a}N_a(s^-)+d_{a}^K(N({s^-})))\leq f_\alpha N_\alpha({s^-})}\nonumber \\
&&\hspace{1cm}+h(N({s^-})-e_{\alpha b_1})\mathbf{1}_{0<\theta-f_\alpha N_\alpha(s^-)- \mathbf{1}_{\alpha=A}(f_{a}N_a(s^-)+d_{a}^K(N({s^-})))\leq d_{\alpha b_1}^K(N({s^-}))}\nonumber \\
&&\hspace{1cm}+h(N({s^-})-e_{\alpha b_2})\mathbf{1}_{d^K_{\alpha b_1}(N({s^-}))<\theta-f_\alpha N_\alpha(s^-)- \mathbf{1}_{\alpha=A}(f_{a}N_a(s^-)+d_{a}^K(N({s^-})))\leq d_{\alpha }^K(N({s^-}))}\Big)\nonumber\\
&&\hspace{1cm}- h(N({s^-}))\mathbf{1}_{ \theta \leq f_{a}N_a(s^-)+d_a^K(N({s^-}))+f_{A}N_A(s^-) +d_{A}^K(N({s^-}))}\Big\} Q(ds,d\theta).
\end{eqnarray}
Let us introduce the functions
$\mu^\alpha_{K}$ defined for $\alpha \in \mathcal{A}$ and $(s,\theta)$ in $ \R_+ \times \R_+$ by,
\begin{eqnarray} \label{defmua}\mu^\alpha_{K}(N,s,\theta)&=&\frac{\mathbf{1}_{N_\alpha(s)\geq 1}N_{\alpha b_2}(s)}{(N_{\alpha }(s)+1)N_{\alpha }(s)}
\mathbf{1}_{0<\theta-\mathbf{1}_{\alpha=A}(f_{a}N_a(s)+d_{a}^K(N({s})))\leq b^K_{\alpha b_1}(N({s}))}\\
&&- \frac{\mathbf{1}_{N_\alpha(s)\geq 1}N_{\alpha b_1}(s)}{(N_{\alpha}(s)+1)N_{\alpha}(s)}
\mathbf{1}_{b^K_{\alpha b_1}(N({s}))<\theta-\mathbf{1}_{\alpha=A}(f_{a}N_a(s)+d_{a}^K(N({s})))\leq f_\alpha N_\alpha({s})}\nonumber\\
&&- \frac{\mathbf{1}_{N_\alpha(s)\geq 2}N_{\alpha b_2}(s)}{(N_{\alpha}(s)-1)N_{\alpha}(s)}
\mathbf{1}_{0<\theta-f_\alpha N_\alpha(s)- \mathbf{1}_{\alpha=A}(f_{a}N_a(s)+d_{a}^K(N({s})))\leq d_{\alpha b_1}^K(N({s}))}\nonumber\\
&&+\frac{\mathbf{1}_{N_\alpha(s)\geq 2}N_{\alpha b_1}(s)}{(N_{\alpha}(s)-1)N_{\alpha}(s)}
\mathbf{1}_{d^K_{\alpha b_1}(N({s}))<\theta-f_\alpha N_\alpha(s)- \mathbf{1}_{\alpha=A}(f_{a}N_a(s)+d_{a}^K(N({s})))\leq d_{\alpha }^K(N({s}))}.\nonumber
\end{eqnarray}
Then we can represent the neutral allele proportions $P_{\alpha,b_1}$ as,
\begin{equation} \label{ecripbiamu} P_{\alpha,b_1}(t)=P_{\alpha,b_1}(0)+\int_0^t \int_{0}^\infty \mu^\alpha_{K}(N,s^-,\theta)Q(ds,d\theta),\quad t\geq 0. \end{equation}
A direct calculation gives
$$ \int_0^\infty \mu^\alpha_{K}(N,s,\theta)d\theta=r_Kf_Af_a {\mathbf{1}_{\{N_\alpha(s)\geq 1\}}}\frac{N_{\bar{\alpha}b_1}(s)N_{\alpha b_2}(s)-N_{\alpha b_1}(s)N_{\bar{\alpha}b_2}(s)}{(N_{\alpha}(s)+1)(f_A N_{A}(s)+f_a N_{a}(s))} .$$
Thus if we introduce the compensated Poisson measure $\tilde{Q}(ds,d\theta):={Q}(ds,d\theta)-dsd\theta$, then
\begin{multline*} M_\alpha(t):= \int_0^t \int_0^\infty \mu^\alpha_{K} (N,s^-,\theta)\tilde{Q}(ds,d\theta)
\\= P_{\alpha,b_1}(t)-P_{\alpha,b_1}(0)-r_Kf_A f_a
\int_0^{t }{\mathbf{1}_{\{N_\alpha(s)\geq 1\}}} \frac{N_{\bar{\alpha}b_1}(s)N_{\alpha b_2}(s)-N_{\alpha b_1}(s)N_{\bar{\alpha}b_2}(s)}{(N_{\alpha}(s)+1)(f_A N_{A}(s)+f_a N_{a}(s))}ds
\end{multline*}
is a local martingale. By construction the process $P_{\alpha,b_1}$ has values in $[0,1]$ and as $r_K\leq 1$,
\begin{equation}\label{boundmart} \underset{s\leq t}{\sup}\ \Big| r_Kf_A f_a \int_0^{s} {\mathbf{1}_{\{N_\alpha\geq 1\}}}
\frac{ N_{\bar{\alpha}b_1}N_{\alpha b_2}-N_{\alpha b_1}N_{\bar{\alpha}b_2}}{(N_{\alpha}+1)(f_A N_{A}+f_a N_{a})} \Big|\leq r_K f_\alpha t \leq f_\alpha t ,\quad t\geq 0.\end{equation}
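Indeed, $|N_{\bar{\alpha}b_1}N_{\alpha b_2}-N_{\alpha b_1}N_{\bar{\alpha}b_2}|\leq N_{\alpha}N_{\bar{\alpha}}$ and $f_Af_a=f_\alpha f_{\bar{\alpha}}$, so that the integrand in \eqref{boundmart} is bounded by
$$ r_Kf_Af_a\frac{N_{\alpha}N_{\bar{\alpha}}}{(N_{\alpha}+1)(f_AN_A+f_aN_a)}\leq r_Kf_\alpha\frac{f_{\bar{\alpha}}N_{\bar{\alpha}}}{f_AN_A+f_aN_a}\leq r_Kf_\alpha. $$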
Thus $M_\alpha$ is a square integrable pure jump martingale bounded on every finite interval with quadratic variation
\begin{eqnarray}\label{crochet} \langle M_\alpha \rangle_{t} &=&
\int_0^{t} \int_0^\infty\Big( \mu^\alpha_{K}(N,s,\theta)\Big)^2dsd\theta \nonumber\\
&=& \int_0^{t}\Big\{ P_{\alpha,b_1}(1-P_{\alpha,b_1}) \Big[\Big(D_\alpha+\frac{C_{\alpha, A}}{K}N_{A}+\frac{C_{\alpha,a}}{K}N_{a}\Big) \frac{\mathbf{1}_{N_\alpha\geq 2}N_\alpha}{(N_\alpha-1)^2} \nonumber \\
&&\hspace{.5cm} + \frac{f_\alpha N_\alpha}{(N_{\alpha}+1)^2}\Big] +
r_Kf_A f_{a}{\mathbf{1}_{\{N_\alpha\geq 1\}}}\frac{(N_{\bar{\alpha}b_1}N_{\alpha b_2}-N_{\alpha b_1}N_{\bar{\alpha}b_2}) (1-2P_{\alpha,b_1})}{(N_{\alpha}+1)^2(f_A N_{A}+f_{a} N_{a})}\Big\} .
\end{eqnarray}
This ends the proof of Proposition \ref{mart_prop}. \end{proof}
\begin{rem}
By definition of the functions $\mu^\alpha_K$ in \eqref{defmua} we have for all $(s,\theta)$
in $ \R_+ \times \R_+$,
\begin{equation}\label{muAmua0}
\mu_K^A(N,s,\theta)\mu_K^a(N,s,\theta)=0.
\end{equation}
\end{rem}
Lemma \ref{lemmualpha} states properties of the quadratic variation $\langle M_\alpha \rangle$ that will be used repeatedly in the forthcoming proofs.
We introduce a compact interval containing
the equilibrium size of the $A$-population,
\begin{equation} \label{compact1}I_\eps^K:= \Big[K\Big(\bar{n}_A-2\eps \frac{C_{A,a}}{C_{A,A}}\Big),K\Big(\bar{n}_A+2\eps \frac{C_{A,a}}{C_{A,A}}\Big)\Big]\cap \N, \end{equation}
and the stopping times $T^K_\eps$ and $S^K_\eps$, which denote respectively the hitting time of $\lfloor\eps K \rfloor$ by the mutant population and the exit time of $I_\eps^K$ by the resident population,
\begin{equation} \label{TKTKeps1} T^K_\eps := \inf \Big\{ t \geq 0, N^K_a(t)= \lfloor \eps K \rfloor \Big\},\quad S^K_\eps := \inf \Big\{ t \geq 0, N^K_A(t)\notin I_\eps^K \Big\}. \end{equation}
Finally we introduce a constant depending on $\alpha \in \mathcal{A}$ and $v \in \R_+^*$,
\begin{equation}\label{defcalphanu} C(\alpha,v):=4D_\alpha+2f_\alpha+4(C_{\alpha,A}+C_{\alpha,a})v. \end{equation}
\begin{lem}\label{lemmualpha}
For $v<\infty$ and $t \geq 0$ such that $(N_A^{(z,K)}(t),N_a^{(z,K)}(t))\in
[0,vK]^2$,
\begin{equation}\label{crocheten1K1}
{\frac{d}{dt}\langle M_\alpha^{(z,K)} \rangle_t }=\int_0^\infty\Big( \mu^\alpha_{K}(N^{(z,K)},t,\theta)\Big)^2d\theta\leq {C(\alpha,v)}\frac{\mathbf{1}_{N_\alpha(t)\geq 1}}{N_\alpha(t)},
\quad \alpha \in \mathcal{A}.
\end{equation}
Moreover, under Assumptions \ref{assumption_eq} and \ref{defrK}, there exist $k_0 \in \N $, $\eps_0>0$ and a pure jump martingale $\bar{M}$ such that for
$\eps\leq \eps_0$ and $t\geq 0$,
\begin{equation} \label{dcrocheta}
e^{\frac{S_{aA}}{2(k_0+1)}(t\wedge {T}^K_\eps \wedge S^K_\eps)}
\int_0^\infty\Big( \mu^a_{K}(N^{(z^{(K)},K)},t\wedge {T}^K_\eps \wedge S^K_\eps,\theta)\Big)^2d
\theta \leq (k_0+1)C(a,2\bar{n}_A) \bar{M}_{t\wedge {T}^K_\eps \wedge S^K_\eps},
\end{equation}
and
\begin{equation}\label{tildemart}
\E\Big[\bar{M}_{t\wedge {T}^K_\eps \wedge S^K_\eps} \Big]\leq \frac{1}{k_0+1}.
\end{equation}
\end{lem}
\begin{proof}
Equation \eqref{crocheten1K1} is a direct consequence of \eqref{crochet}. To prove \eqref{dcrocheta} and \eqref{tildemart}, let us first notice that
according to Assumption \ref{assumption_eq}, there exists $k_0 \in \N$ such that for $\eps$ small enough and $k \in \Z_+$,
$$\frac{f_a(k_0+k-1)-(D_a+C_{a,A}\bar{n}_A+\eps(C_{a,a}+2C_{A,a}C_{a,A}/C_{A,A}))(k_0+k+1)}{k_0+k-1}\geq \frac{S_{aA}}{2}.$$
This implies in particular that for every $t < {T}^K_\eps \wedge S^K_\eps$,
\begin{equation}\label{k0} \frac{
f_a N_a(t)(N_a(t)+k_0-1)-d_a^K(N(t))(N_a(t)+k_0+1)}{(N_a(t)+k_0-1)(N_a(t)+k_0+1)}\geq \frac{S_{aA}N_a(t)}{2(N_a(t)+k_0+1)}\geq \frac{S_{aA}\mathbf{1}_{N_a(t)\geq 1}}{2(k_0+1)},\end{equation}
where the death rate $d_a^K$ has been defined in \eqref{defdaba}. For the sake of simplicity let us introduce the process $X$ defined as follows:
$$ {X(t)=\frac{1}{N_a({t})+k_0}\exp\Big(\frac{S_{aA}t}{2(k_0+1)}\Big),\quad \forall t\geq 0.} $$
Applying Itô's formula with jumps we get for every $t\geq 0$:
\begin{multline} \label{itotilde}
X({t\wedge {T}^K_\eps \wedge S^K_\eps})=\bar{M}(t\wedge {T}^K_\eps \wedge S^K_\eps)+\\
\int_0^{t\wedge {T}^K_\eps \wedge S^K_\eps}\Big(\frac{S_{aA}}{2(k_0+1)}- \frac{f_a N_a(s)(N_a(s)+k_0-1)-d_a^K(N(s))(N_a(s)+k_0+1)
}{(N_a(s)+k_0-1)(N_a(s)+k_0+1)}\Big)X(s)ds,
\end{multline}
where the martingale $\bar{M}$ has the following expression:
\begin{multline} \label{exprtildeM} \bar{M}(t)=\frac{1}{k_0+1}+\int_0^t\int_{\R_+}\tilde{Q}(ds,d\theta)\exp\Big(\frac{S_{aA}s}{2(k_0+1)}\Big)\\
\Big[
\frac{ \mathbf{1}_{\theta \leq f_aN_a(s^-)} }{N_a(s^-)+k_0+1}
+\frac{ \mathbf{1}_{ f_aN_a(s^-)<\theta \leq f_aN_a(s^-)+d_a(N(s^-))} }{N_a(s^-)+k_0-1}
-\frac{\mathbf{1}_{\theta \leq f_aN_a(s^-)+d_a(N(s^-))} }{N_a(s^-)+k_0}
\Big] .\end{multline}
Thanks to \eqref{k0} the integral in \eqref{itotilde} is nonpositive. Moreover, according to \eqref{crocheten1K1}, for $t \leq {T}^K_\eps \wedge
S^K_\eps$,
as $2\eps{C_{A,a}}/{C_{A,A}}\leq \bar{n}_A$ for $\eps$ small enough,
\begin{multline} \int_0^\infty\Big( \mu^a_{K}(N^{(z^{(K)},K)},t,\theta)\Big)^2d
\theta \leq C(a,2\bar{n}_A)\frac{\mathbf{1}_{N_a(t)\geq 1}}{N_a(t)}\\
\leq (k_0+1)C(a,2\bar{n}_A)X(t)\exp\Big(-\frac{S_{aA}t}{2(k_0+1)}\Big) ,\end{multline}
which ends the proof.\end{proof}
\section{Proof of Theorem \ref{main_result2}}\label{proofstand}
In this section we suppose that Assumption \ref{assumption_eq} holds. For $\eps\leq C_{a,a}/C_{a,A} \wedge 2|S_{Aa}|/C_{A,a}$ and $z$
in $\R_+^{A\times \mathcal{B}}\times (\R_+^{a\times \mathcal{B}} \setminus \{(0,0)\} )$ we introduce a deterministic time $t_{\eps}(z)$ after which the solution $(n_A^{(z)},n_a^{(z)})$ of the dynamical system \eqref{S1} is close to the stable equilibrium $(0,\bar{n}_a)$:
\begin{equation}\label{deftepsz1} t_{\eps}(z):=\inf \big\{ s \geq 0,\forall t \geq s, ({n}_A^{(z)}(t),{n}_a^{(z)}(t))\in
[0,\eps^2/2]\times[\bar{n}_a-\eps/2,\infty) \big\}. \end{equation}
Once $(n_A^{(z)},n_a^{(z)})$ has reached the set $[0,\eps^2/2]\times[\bar{n}_a-\eps/2,\infty)$ it
never escapes from it. Moreover, according to Assumption \ref{assumption_eq} on
the stable equilibrium, $t_\eps(z)$ is finite.\\
First we compare the population process with the four-dimensional dynamical system \eqref{syst_dyn} on the
time interval $[0, t_\eps(z)]$. Then we study this dynamical system and get an approximation of
the neutral proportions at time $t_\eps(z)$. Finally, we state that
during the $A$-population extinction period, these proportions stay nearly constant.
\subsection{Comparison with a four dimensional dynamical system}
Recall that $n^{(z,K)}=(n^{(z,K)}_{\alpha \beta},(\alpha,\beta)\in \mathcal{E})$ is the solution of the dynamical system \eqref{syst_dyn}
with initial condition $z$. Then we have the following comparison result:
\begin{lem}\label{lemapprox}
Let $z$ be in $\R_+^\mathcal{E}$ and $\eps$ be in $\R_+^*$. Then
\begin{equation}\label{EK2}
\underset{K \to \infty}{\lim}\ \sup_{s\leq t_\eps(z)}\ \|{N}^{(z,K)}(s)/K-n^{(z,K)}(s) \|=0 \quad \text{in probability}
\end{equation}
where $\|. \|$ denotes the $L^1$-norm on $\R^\mathcal{E}$.
\end{lem}
\begin{proof}
The proof relies on a slight modification of Theorem 2.1 p. 456 in Ethier and Kurtz \cite{EK}.
According to \eqref{death_rate} and \eqref{birth_rate}, the rescaled birth and death rates
\begin{equation}\label{deftildeb} \tilde{b}_{\alpha \beta}^K(n)=\frac{1}{K}b_{\alpha \beta}^K(Kn)=f_\alpha n_{\alpha\beta} + r_K f_a f_{A}\frac{n_{\bar{\alpha}\beta}n_{\alpha\bar{\beta}}
-n_{\alpha\beta}n_{\bar{\alpha}\bar{\beta}}}{f_A n_{A}+f_a n_{a}}, \quad (\alpha, \beta) \in \mathcal{E}, n \in \R_+^\mathcal{E}, \end{equation}
and
\begin{equation}\label{deftilded} \tilde{d}_{\alpha \beta}(n)=\frac{1}{K}d_{\alpha \beta}^K(Kn)=\left[ D_\alpha+C_{\alpha,A}n_{A}+C_{\alpha,a}n_{a}\right]{n_{\alpha\beta}},
\quad (\alpha, \beta) \in \mathcal{E}, n \in \R_+^\mathcal{E}, \end{equation}
are Lipschitz and bounded on every compact subset of $ \R_+^\mathcal{E}$. The only difference with \cite{EK} is that $ \tilde{b}_{\alpha \beta}^K$
depends on $K$ via the term $r_K$.
Let $(Y_i^{(\alpha\beta)},i \in \{1,2\}, (\alpha,\beta)\in \mathcal{E})$ be eight independent standard Poisson processes.
From the representation of the population process $N^{(z,K)}$ in \eqref{defN} we see that the process $(\bar{N}^{(z,K)}(t), t \geq 0)$ defined by
\begin{equation*}
\bar{N}^{(z,K)}(t) =\lfloor zK\rfloor+\underset{(\alpha,\beta)\in \mathcal{E}}{\sum}\Big[{Y}_1^{(\alpha\beta)}\Big( \int_0^t{b}_{\alpha \beta}^K(\bar{N}^{(z,K)}({s})) ds\Big)-
{Y}_2^{(\alpha\beta)}\Big( \int_0^t{d}_{\alpha \beta}^K(\bar{N}^{(z,K)}({s})) ds\Big)\Big],
\end{equation*}
has the same law as $({N}^{(z,K)}(t), t \geq 0)$.
Applying Definitions \eqref{deftildeb} and \eqref{deftilded} we get:
$$\frac{ \bar{N}^{(z,K)}(t)}{K} =\frac{\lfloor zK\rfloor}{K}+Mart^{(z,K)}(t)+\int_0^t \underset{(\alpha,\beta)\in \mathcal{E}}{\sum}
e_{\alpha \beta}\Big( \tilde{b}_{\alpha \beta}^K\Big(\frac{ \bar{N}^{(z,K)}({s})}{K}\Big) -\tilde{d}_{\alpha \beta}\Big(\frac{ \bar{N}^{(z,K)}({s})}{K}\Big) \Big)ds, $$
where we recall that $(e_{\alpha \beta}, (\alpha,\beta)\in\mathcal{E})$ is the canonical basis of $\R_+^\mathcal{E}$ and the martingale $Mart^{(z,K)}$ is defined by
$$Mart^{(z,K)}(t) := \frac{1}{K}\underset{(\alpha,\beta)\in \mathcal{E}}{\sum}\Big[\tilde{Y}_1^{(\alpha\beta)}\Big(K \int_0^t\tilde{b}_{\alpha \beta}^K\Big(\frac{ \bar{N}^{(z,K)}({s})}{K}\Big) ds\Big)-
\tilde{Y}_2^{(\alpha\beta)}\Big(K \int_0^t\tilde{d}_{\alpha \beta}\Big(\frac{ \bar{N}^{(z,K)}({s})}{K}\Big) ds\Big)\Big]$$
and $(\tilde{Y}_i^{(\alpha\beta)}(u)={Y}_i^{(\alpha\beta)}(u)-u,u \geq 0,i \in \{1,2\}, (\alpha,\beta)\in \mathcal{E})$ are the compensated Poisson processes.
We also have by definition
$$n^{(z,K)}(t)= z+\int_0^t \underset{(\alpha,\beta)\in \mathcal{E}}{\sum}
e_{\alpha \beta}\Big( \tilde{b}_{\alpha \beta}^K(n^{(z,K)}({s})) -\tilde{d}_{\alpha \beta}(n^{(z,K)}({s})) \Big)ds. $$
Hence, for every $t\leq t_\eps(z)$,
\begin{multline*}
\Big|\frac{\bar{N}^{(z,K)}(t)}{K} -n^{(z,K)}(t)\Big|\leq \Big|\frac{\lfloor zK \rfloor}{K}-z \Big|+ \Big|Mart^{(z,K)}(t)\Big| \\
+
\int_0^t \underset{(\alpha,\beta)\in \mathcal{E}}{\sum}\Big|
\Big( \tilde{b}_{\alpha \beta}^K -\tilde{d}_{\alpha \beta}\Big)\Big(\frac{\bar{N}^{(z,K)}({s})}{K}\Big) -
\Big( \tilde{b}_{\alpha \beta}^K -\tilde{d}_{\alpha \beta}\Big)\Big(n^{(z,K)}({s})\Big) \Big|ds ,
\end{multline*}
and there exists a finite constant $\mathcal{K}$ such that
\begin{equation*}
\Big|\frac{\bar{N}^{(z,K)}(t)}{K} -n^{(z,K)}(t)\Big|\leq \frac{1}{K}+ \Big|Mart^{(z,K)}(t)\Big|
+ \mathcal{K}
\int_0^t \Big|
\frac{\bar{N}^{(z,K)}({s})}{K} - n^{(z,K)}({s}) \Big|ds.
\end{equation*}
But following Ethier and Kurtz, we get
$$ \underset{K \to \infty}{\lim}\underset{s\leq t_\eps(z)}{\sup}|Mart^{(z,K)}(s)|=0, \quad \text{a.s.}, $$
and using Gronwall's Lemma we finally obtain
\begin{equation*}
\underset{K \to \infty}{\lim}\ \sup_{s\leq t_\eps(z)}\ \|\bar{N}^{(z,K)}(s)/K-n^{(z,K)}(s) \|=0 \quad \text{a.s.}
\end{equation*}
As $\bar{N}^{(z,K)}$ and ${N}^{(z,K)}$ have the same law, and convergence in law to a constant is equivalent to convergence in probability to this constant, the result follows.
\end{proof}
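The law-of-large-numbers scaling behind Lemma \ref{lemapprox} can be illustrated numerically. The following Python sketch (not part of the proof; the one-type setting and all parameter values are illustrative simplifications of the rates \eqref{deftildeb}--\eqref{deftilded}) simulates a logistic birth and death process in which each individual gives birth at rate $f$ and dies at rate $D+CN/K$, and compares the rescaled state $N(t)/K$ with the explicit solution of the limiting equation $\dot{n}=(f-D-Cn)n$:

```python
import math
import random

def gillespie_logistic(K, z0=0.2, f=2.0, D=1.0, C=1.0, t_end=3.0, seed=1):
    """Simulate a logistic birth-death process: each of the N individuals
    gives birth at rate f and dies at rate D + C*N/K.  Returns N(t_end)/K."""
    rng = random.Random(seed)
    t, n = 0.0, int(z0 * K)
    while n > 0:
        birth = f * n
        death = (D + C * n / K) * n
        t += rng.expovariate(birth + death)  # time to next event
        if t >= t_end:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n / K

def logistic_solution(z0=0.2, t=3.0):
    """Explicit solution of dn/dt = (f - D - C n) n for f=2, D=1, C=1,
    i.e. dn/dt = n (1 - n), started from z0."""
    return z0 * math.exp(t) / (1.0 + z0 * (math.exp(t) - 1.0))
```

For large $K$ the rescaled trajectory stays within a deviation of order $K^{-1/2}$ of the deterministic curve, in line with the lemma.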
Once we know that the rescaled population process is close to the solution of the dynamical system \eqref{syst_dyn},
we can study the dynamical system.
\begin{lem}\label{lemstudysd}
Let $z$ be in $\R^{\mathcal{E}}_+$ such that $z_A>0$ and $z_a>0$. Then $n_a^{(z,K)}(t)$ and $n^{(z,K)}_{ab_1}(t)$ have a finite limit when
$t$ goes to infinity, and
there exists a positive constant $\eps_0$ such that for every $\eps\leq \eps_0$,
\begin{equation*}
\Big| \frac{n^{(z,K)}_{ab_1}(\infty)}{n^{(z,K)}_{a}(\infty)}- \frac{n^{(z,K)}_{ab_1}(t_\eps(z))}{n^{(z,K)}_{a}(t_\eps(z))} \Big|
\leq \frac{2f_a\eps^2}{\bar{n}_A|S_{aA}|}.
\end{equation*}
\end{lem}
\begin{proof}
First notice that by definition of the dynamical systems \eqref{S1} and \eqref{syst_dyn}, $n_\alpha^{(z,K)}=n_\alpha^{(z)}$ for
$\alpha \in \mathcal{A}$ and $z \in \R^{\mathcal{E}}$.
Assumption \ref{assumption_eq} ensures that $n_a^{(z)}(t)$ goes to $\bar{n}_a$ at infinity. If we define the functions
$$p_{\alpha,b_1}^{(z,K)}= {n^{(z,K)}_{\alpha b_1}}/{{n}^{(z)}_{\alpha}}, \quad \alpha \in \mathcal{A},\quad
\text{and} \quad g^{(z,K)}=p_{A,b_1}^{(z,K)}-p_{a,b_1}^{(z,K)},$$
we easily check that $\phi:(n_{Ab_1}^{(z,K)},n_{Ab_2}^{(z,K)},n_{ab_1}^{(z,K)},n_{ab_2}^{(z,K)})\mapsto (n_A^{(z)},n_a^{(z)},
g^{(z,K)},p_{a,b_1}^{(z,K)})$ defines a change of variables from $(\R_+^{*})^\mathcal{E}$ to
$(\R_+^{*})^2 \times ]-1,1[\times]0,1[$, and \eqref{syst_dyn} is equivalent to:
\begin{equation} \label{syst_dyn2}
\left\{ \begin{array}{l}
\dot{n}^{(z)}_{\alpha} =( f_\alpha -(D_\alpha+C_{\alpha,A}{n}^{(z)}_{A}+C_{\alpha,a}{n}^{(z)}_{a})){n}^{(z)}_{\alpha},\quad \alpha \in \mathcal{A} \\
\dot{g}^{(z,K)} =-g^{(z,K)}\Big(r_K f_Af_a(n^{(z)}_A+n^{(z)}_a)/(f_An^{(z)}_A+f_an^{(z)}_a)\Big) \\
\dot{p}_{a,b_1}^{(z,K)} =g^{(z,K)}\Big(r_K f_Af_an^{(z)}_A/(f_An^{(z)}_A+f_an^{(z)}_a)\Big) ,\end{array} \right.
\end{equation}
with initial condition
$$(n_A^{(z)}(0),n_a^{(z)}(0),
g^{(z,K)}(0),p_{a,b_1}^{(z,K)}(0))=(z_A,z_a,z_{Ab_1}/z_A-z_{ab_1}/z_a ,z_{ab_1}/z_a).$$
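The equation for $g^{(z,K)}$ in \eqref{syst_dyn2} is linear and can be integrated explicitly:
$$ g^{(z,K)}(t)=g^{(z,K)}(0)\exp\Big(-r_K f_Af_a\int_0^t\frac{n^{(z)}_A(s)+n^{(z)}_a(s)}{f_An^{(z)}_A(s)+f_an^{(z)}_a(s)}ds\Big),\quad t\geq 0. $$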
Moreover, a direct integration yields
\begin{equation}\label{expreproba} {p^{(z,K)}_{a,b_1}(t)}= {p^{(z,K)}_{a,b_1}(0)}-( {p_{a,b_1}^{(z,K)}(0)}-{p_{A,b_1}^{(z,K)}(0)} )F(z,r_K,t),\end{equation}
where $F$ has been defined in \eqref{defF}. According to \eqref{limF}, $F(z,r_K,t)$ has a finite limit when $t$ goes to infinity. Hence $p^{(z,K)}_{a,b_1}$
also admits a limit at infinity.
Let $\eps\leq |S_{Aa}|/C_{A,a}\wedge C_{a,a}/ C_{A,a}\wedge \bar{n}_a/2$, and let $t_\eps(z)$ be defined as in \eqref{deftepsz1}. Then for $t \geq t_\eps(z)$,
$$ \dot{n}_A^{(z)}(t) \leq (f_A-D_A-C_{A,a}(\bar{n}_a-\eps/2))n_A^{(z)}(t) \leq S_{Aa} {n}_A^{(z)}(t)/2<0.$$
Recalling that $r_K\leq 1$ and $|g^{(z,K)}(t)|\leq 1$ for all
$t\geq 0$ we get:
\begin{equation} \label{diff_lim}\Big|p^{(z,K)}_{a,b_1}(\infty)-p^{(z,K)}_{a,b_1}(t_\eps(z))\Big|\leq
\int_{t_\eps(z)}^\infty \frac{f_Af_an^{(z)}_A(s)}{f_An^{(z)}_A(s)+f_an^{(z)}_a(s)}ds\leq \frac{f_A\eps^2}{\bar{n}_a-\eps/2}\int_0^\infty e^{S_{Aa}s/2}ds
\leq \frac{2f_A\eps^2}{(\bar{n}_a-\eps/2)|S_{Aa}|},\end{equation}
which ends the proof.\end{proof}
\subsection{$A$-population extinction}
The deterministic approximation \eqref{syst_dyn} fails when the $A$-population size becomes too small. We shall compare
$N_A$ with birth and death processes to study the last period of the mutant invasion. We show that during this period, the number of $A$ individuals is so small
that it has no influence on the neutral proportion in the $a$-population, which stays nearly
constant.
Before stating the result, we recall Definition \eqref{defText} and introduce the compact set $\Theta$:
\begin{equation}\label{defTheta}
\Theta:=\big\{ z \in \R_+^{A\times \mathcal{B}} \times \R_+^{a\times \mathcal{B}}, z_A\leq \eps^2 \quad \text{and} \quad |z_a-\bar{n}_a |\leq \eps \big\},
\end{equation}
the constant $M''=3+(f_a+C_{a,A})/C_{a,a}$, and the stopping time:
\begin{equation}\label{SKeps}
{U}^K_\eps(z):=\inf \Big\{ t\geq 0, N^{(z,K)}_{A}(t)> \eps K \text{ or } |N^{(z,K)}_a(t)-\bar{n}_a K|>M''\eps K \Big\}.
\end{equation}
\begin{lem}\label{third_step}
Let $z$ be in $\Theta$. Under Assumption \ref{assumption_eq}, there exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$,
$$ \underset{K \to \infty}{\limsup} \ \P\Big(\underset{t \leq T^{(z,K)}_{\textnormal{ext}}}{\sup}\Big|P_{a,b_1}^{(z,K)}(t)-P_{a,b_1}^{(z,K)}(0)\Big|>\eps\Big)\leq c\eps. $$
\end{lem}
\begin{proof}
Let $z $ be in $\Theta$ and $Z^1$ be a birth and death process with individual birth rate $f_A$, individual death rate
$D_A+(\bar{n}_a-M''\eps)C_{A,a}$, and initial state $\lceil \eps^2 K \rceil$.
Then on $[0,U_\eps^K(z)[$, $N_A$ and $Z^1$ have the same birth rate,
and $Z^1$ has a smaller death rate than $N_A$. Thus according to Theorem 2 in \cite{champagnat2006microscopic},
we can construct the processes $N$ and $Z^1$ on the same probability space such that:
\begin{equation}\label{compaZ1} N_A(t)\leq Z^1_t, \quad \forall t < U_\eps^K(z). \end{equation}
Moreover, if we denote by $T_0^1$ the extinction time of $Z^1$, $ T_0^1:=\inf\{ t\geq 0, Z_t^1=0 \} ,$ and recall that
\begin{equation}\label{decroissance} f_A-D_A-(\bar{n}_a-M''\eps)C_{A,a}=S_{Aa}+M''C_{A,a}\eps<S_{Aa}/2<0, \quad \forall \eps <|S_{Aa}|/(2M''C_{Aa}), \end{equation}
we get according to \eqref{ext_times} that for $z_A\leq \eps^2$ and
$$L(\eps,K)=2 \log K/|S_{Aa}+M''\eps C_{A,a}|,$$
\begin{eqnarray*}
\P_{\lceil z K \rceil}\Big(T_0^1\leq L(\eps,K) \Big) \geq \exp\Big( \lceil \eps^2 K \rceil \Big[\log (K^2-1)-\log(K^2-f_A(D_A+(\bar{n}_a-M''\eps)C_{A,a})^{-1}) \Big] \Big).
\end{eqnarray*}
Thus:
\begin{equation} \label{sim}
\lim_{K \to \infty}\P_{\lceil z K \rceil}\Big(T_0^1< L(\eps,K) \Big)= 1.
\end{equation}
Moreover, Equation \eqref{resSepsKz} ensures the existence of a finite $c$ such that for $\eps$ small enough,
\begin{equation} \label{LS}
\P \Big(L(\eps,K)<U_\eps^{K}(z) \Big)\geq 1-c\eps. \end{equation}
Equations \eqref{sim} and \eqref{LS} imply
\begin{equation} \label{LS2}
\underset{K \to \infty}{\liminf} \ \P \Big(T_0^1< L(\eps,K)<U_\eps^{K}(z) \Big)\geq 1-c\eps \end{equation}
for a finite $c$ and $\eps$ small enough. According to the coupling \eqref{compaZ1} we have the inclusion
$$\{T_0^1< L(\eps,K)<U_\eps^{K}(z)\}\subset\{T^K_{\text{ext}}< L(\eps,K)<U_\eps^{K}(z)\}.$$
Combining this with \eqref{LS2} we finally get:
\begin{equation} \label{LS3}
\underset{K \to \infty}{\liminf}\ \P(T_{\text{ext}}^K< L(\eps,K)<U_\eps^{K}(z))\geq 1-c\eps. \end{equation}
Recall the martingale decomposition of $P_{a,b_1}$ in \eqref{defM}. To bound the difference $|P_{a,b_1}(t)-P_{a,b_1}(0)|$ we bound independently
the martingale $M_a(t)$ and the integral $|P_{a,b_1}(t)-P_{a,b_1}(0)-M_a(t)|$.
On one hand Doob's Maximal Inequality and Equation \eqref{crocheten1K1} imply:
\begin{eqnarray}\label{maj_mart} \P\Big(\underset{t \leq L(\eps,K) \wedge U_\eps^K }{\sup}|M_a(t)|>\frac{{\varepsilon}}{2} \Big)
& \leq & \frac{4}{\varepsilon^2}\E\Big[ \langle M_a\rangle_{ L(\eps,K)\wedge U_\eps^{K}(z)} \Big] \nonumber \\
& \leq & \frac{4 C(a,\bar{n}_a+M''\eps)L(\eps,K)}{\eps^2K(\bar{n}_a-M''\eps)}\nonumber\\
& = & \frac{8 C(a,\bar{n}_a+M''\eps)\log K}{\eps^2K(\bar{n}_a-M''\eps)|S_{Aa}+M''\eps C_{A,a}|}. \end{eqnarray}
On the other hand the inequality $|N_{Ab_1}N_{a b_2}-N_{a b_1}N_{Ab_2}|\leq N_AN_a$ yields for $t \geq 0$
\begin{eqnarray*} \Big| \int_0^{t\wedge U_\eps^K(z)}\frac{r_Kf_Af_a (N_{Ab_1}N_{a b_2}-N_{a b_1}N_{Ab_2})}
{(N_a+1)(f_AN_A+f_aN_a)}ds \Big| \leq \int_0^{{t\wedge U_\eps^K(z)}} \frac{f_A N_A(s)}{(\bar{n}_a- \eps M'')K}ds.
\end{eqnarray*}
Hence decomposition \eqref{defM}, Markov's Inequality, and Equations \eqref{compaZ1}, \eqref{espviemort} and \eqref{decroissance} yield
\begin{equation} \label{maj_int} \P\Big(\Big|(P_{a,b_1}-M_a)(t\wedge U_\eps^K(z))-P_{a,b_1}(0)\Big|>\frac{\eps}{2}\Big)\leq
\frac{ 2f_A\eps^2}{\eps(\bar{n}_a- \eps M'')} \int_0^{t} e^{\frac{S_{Aa}s}{2}}ds
\leq \frac{4f_A\eps}{(\bar{n}_a- \eps M'')|S_{Aa}|}.
\end{equation}
Letting $K$ go to infinity in \eqref{maj_mart} and combining with \eqref{maj_int} ends the proof.
\end{proof}
\subsection{End of the proof of Theorem \ref{main_result2}}
Recall Definitions \eqref{defText} and \eqref{deftepsz1}. We have:
\begin{multline*} \Big|P_{a,b_1}^{(z,K)}(T_{\text{ext}}^{(z,K)})-p_{a,b_1}^{(z,K)}(\infty)\Big| \leq \Big|P_{a,b_1}^{(z,K)}(T_{\text{ext}}^{(z,K)})-P_{a,b_1}^{(z,K)}(t_\eps(z))\Big|+\\
\Big|P_{a,b_1}^{(z,K)}(t_\eps(z))-p_{a,b_1}^{(z,K)}(t_\eps(z))\Big|+\Big|p_{a,b_1}^{(z,K)}(t_\eps(z))-p_{a,b_1}^{(z,K)}(\infty)\Big|. \end{multline*}
To bound the two last terms we use respectively Lemmas \ref{lemapprox} and \ref{lemstudysd}. For the first term
of the right hand side,
\eqref{result_champa1} ensures that with high probability, $N^{(z,K)}(t_\eps(z)) \in \Theta$ and $t_\eps(z)<T_{\text{ext}}^{(z,K)}$. Lemma \ref{third_step},
Equation \eqref{convfixcas1} and Markov's Inequality allow us to conclude that for $\eps$ small enough
$$ \underset{K \to \infty}{\limsup}\ \P(\mathbf{1}_{\text{Fix}^{(z,K)}}|P_{a,b_1}^{(z,K)}(T_{\text{ext}}^{(z,K)})-p_{a,b_1}^{(z,K)}(\infty)|>3\eps)\leq c{\eps}, $$
for a finite $c$; as $\eps$ can be taken arbitrarily small, this yields the convergence in probability. Combining this with \eqref{expreproba} completes the proof.
\section{A coupling with two birth and death processes} \label{section_couplage}
In Sections \ref{proofstrong} and \ref{proofweak} we suppose that Assumptions \ref{assumption_eq} and \ref{defrK} hold and we denote by $N^K$ the process $N^{(z^{(K)},K)}$.
As it will appear in the proof of Theorem \ref{main_result} the first period of mutant invasion, which ends at time $T_\eps^K$ when the mutant population size
hits $\lfloor \eps K \rfloor$, is the most important for the neutral proportion dynamics. Indeed, the neutral proportion in the $a$-population has already reached its final value at time $T_\eps^K$. Let us describe a coupling of the process $N_a^K$ with two birth and death processes which will be a key argument to control
the growth of the $a$-population during this first period. We recall Definition \eqref{TKTKeps1} and define for $\eps < S_{aA}/( 2 {C_{a,A}C_{A,a}}/{C_{A,A}}+C_{a,a} )$,
\begin{equation}\label{def_s_-s_+1}
s_-(\eps):=\frac{S_{aA}}{f_a}-\eps\frac{ 2 {C_{a,A}C_{A,a}}+C_{a,a}{C_{A,A}}}{f_a{C_{A,A}}}, \quad \text{and} \quad s_+(\eps):=\frac{S_{aA}}{f_a}
+ 2\varepsilon \frac{C_{a,A}C_{A,a}}{f_aC_{A,A}}.
\end{equation}
Definitions
\eqref{defdaba} and \eqref{deffitinv1} ensure that for $t < T_\eps^K \wedge S_\eps^K$,
\begin{equation}\label{ineqtxmort}
f_a (1-s_+(\eps))\leq \frac{{d}_{a}^K(N^K(t))}{N_a^K(t)}= f_a-S_{aA}+\frac{C_{a,A}}{K}(N^K_A(t)-\bar{n}_AK)+\frac{C_{a,a}}{K}N^K_a(t)\leq f_a (1-s_-(\eps)),
\end{equation}
and following Theorem 2 in \cite{champagnat2006microscopic}, we can construct on the same probability space the processes
$Z^-_\eps$, $N^K$ and $Z^+_\eps$ such that almost surely:
\begin{equation}\label{couplage11}
Z^-_\eps(t) \leq N_a^K(t) \leq Z^+_\eps(t), \quad \text{for all } t < T^K_\eps\wedge S^K_\eps ,
\end{equation}
where for $*\in \{-,+\}$, $Z^*_\eps$ is a birth and death process with initial state $1$, and individual birth and death rates $f_a$ and $f_a (1-s_*(\eps))$.
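The interest of this coupling comes from classical properties of such supercritical birth and death processes: started from a single individual, $Z^*_\eps$ survives with probability $s_*(\eps)$, since its extinction probability equals the ratio of the individual death and birth rates. As a numerical sanity check (outside the proofs, with an illustrative value of $s$), note that only the embedded jump chain matters for survival, so no exponential clocks are needed:

```python
import random

def survival_estimate(s=0.3, ceiling=300, runs=2000, seed=7):
    """Monte Carlo estimate of the survival probability of a birth-death
    process with individual birth rate f and death rate f*(1-s), started
    from one individual; reaching `ceiling` counts as survival.  The exact
    value is s, since the extinction probability equals (1-s)."""
    rng = random.Random(seed)
    up = 1.0 / (2.0 - s)  # P(next event is a birth) = f / (f + f*(1-s))
    survived = 0
    for _ in range(runs):
        n = 1
        while 0 < n < ceiling:
            n += 1 if rng.random() < up else -1
        survived += (n >= ceiling)
    return survived / runs
```

With the default parameters the estimate concentrates around $s=0.3$, up to a Monte Carlo error of order $(\text{runs})^{-1/2}$.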
\section{Proof of Theorem \ref{main_result} in the strong recombination regime}\label{proofstrong}
In this section, we suppose that Assumptions \ref{assumption_eq}, \ref{defrK} and \ref{condstrong} hold.
We distinguish the three periods of the selective sweep: (i) rare mutants
and resident population size near its equilibrium value, (ii) quasi-deterministic period governed by the dynamical system \eqref{S1}, and (iii)
$A$-population extinction. First we prove that at time $T_\eps^K$ proportions of $b_1$ alleles in the populations $A$ and $a$ are close to $z_{Ab_1}/z_A$.
Once the neutral proportions are the same in the two populations, they do not evolve anymore until the end of the sweep.
\begin{lem}\label{majespP}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$:
\begin{equation*}
\underset{K \to \infty}{\limsup} \ \E \Big[\mathbf{1}_{ T_\eps^K\leq S_\eps^K}\Big\{\Big|P_{A,b_1}^K( T_\eps^K)-\frac{ z_{Ab_1}}{ z_A} \Big|
+\Big|P_{A,b_1}^K( T_\eps^K)-P_{a,b_1}^K( T_\eps^K) \Big| \Big\}\Big]
\leq c\eps.
\end{equation*}
\end{lem}
\begin{proof}First we bound the difference between the neutral proportions in the two populations, $|P_{a,b_1}(t)-P_{A,b_1}(t)| $,
then we bound $|P_{A,b_1}(t)-{z_{Ab_1}}/{z_A}|$.
For the sake of simplicity we introduce:
\begin{equation} \label{defG} G(t):=P_{A,b_1}(t)-P_{a,b_1}(t)=\frac{N_{ab_2}(t)N_{Ab_1}(t)-N_{ab_1}(t)N_{Ab_2}(t)}{N_{A}(t)N_{a}(t)}, \quad \forall t \geq 0 , \end{equation}
\begin{equation} \label{defY} Y(t)=G^2(t)e^{r_K f_at/2}, \quad \forall t \geq 0 .\end{equation}
Recalling \eqref{muAmua0} and applying Itô's formula with jumps we get
\begin{multline}\label{exprY}
Y(t\wedge T_\eps^K\wedge S_\eps^K)=Y(0)+\hat{M}_{t\wedge T_\eps^K\wedge S_\eps^K}
+r_K\int_0^t\mathbf{1}_{s< T_\eps^K\wedge S_\eps^K}\Big(f_a/2-H(s) \Big)Y(s)ds\\
+\int_0^t\mathbf{1}_{s< T_\eps^K\wedge S_\eps^K}e^{r_K f_as/2}ds
\int_{\R_+}\Big[\Big(\mu_A^K(N,s,\theta)\Big)^2+\Big(\mu_a^K(N,s,\theta)\Big)^2\Big]d\theta,
\end{multline}
where $\hat{M}$ is a martingale with zero mean, and $H$ is defined by
\begin{equation} \label{defH}
H(t)=\frac{2f_a f_AN_A(t)N_a(t)}{f_AN_A(t)+f_aN_a(t)}\Big[\frac{1}{N_A(t)+1}+\frac{1}{N_a(t)+1}\Big]\geq \frac{f_a}{2},\quad t < T_\eps^K\wedge S_\eps^K,
\end{equation}
for $\eps$ small enough. In particular the first integral in \eqref{exprY} is non-positive.
Applying Lemma \ref{lemmualpha} we obtain:
\begin{eqnarray}\label{majespY}
\E[Y(t\wedge T_\eps^K\wedge S_\eps^K)]&\leq & 1+ \frac{2C(A,2\bar{n}_A)}{r_K f_a(\bar{n}_A-2\eps C_{A,a}/C_{A,A})K}
e^{\frac{r_K f_at}{2}} \nonumber \\
&& +\int_0^t (k_0+1)C(a,2\bar{n}_A)\E\Big[ \bar{M}_{s\wedge T_\eps^K\wedge S_\eps^K}\Big]
e^{ (\frac{r_K f_a}{2}-\frac{S_{aA}}{2(k_0+1)} )s}ds \nonumber \\
& \leq & c\Big( 1+ \frac{1}{Kr_K}e^{\frac{r_K f_at}{2}}+e^{ (\frac{r_Kf_a}{2}-\frac{S_{aA}}{2(k_0+1)} )t} \Big),
\end{eqnarray}
where $c$ is a finite constant which can be chosen independently of $\eps$ and $K$ if $\eps$ is small enough and $K$ large enough.
Combining the semi-martingale decomposition \eqref{defM}, the Cauchy-Schwarz Inequality, and Equations \eqref{crocheten1K1} and \eqref{majespY} we get for every $t \geq 0$,
\begin{multline*} \E \Big[\Big|P_{A,b_1}(t\wedge T_\eps^K\wedge S_\eps^K)-\frac{\lfloor z_{Ab_1}K\rfloor}{\lfloor z_A K\rfloor}\Big|\Big]
\\ \leq \E \Big[|M_A(t\wedge T_\eps^K\wedge S_\eps^K)|\Big]+
\frac{r_K f_a\eps }{\bar{n}_A-2\eps C_{A,a}/C_{A,A}} \int_0^t \E \Big[\mathbf{1}_{s< T_\eps^K\wedge S_\eps^K}|G(s)|\Big]ds\\
\leq \E^{1/2} \Big[\langle M_A \rangle_{t\wedge T_\eps^K\wedge S_\eps^K}\Big]
+cr_K\eps \int_0^t \E^{1/2}\Big[Y(s\wedge T_\eps^K\wedge S_\eps^K)\Big]e^{-r_Kf_as/4}ds
\\
\leq c\Big(\sqrt{t/K}+\eps r_K \int_0^t \Big(e^{-r_K f_as/2}+\frac{1}{Kr_K}+e^{-S_{aA}s/2(k_0+1)}\Big)^{1/2}ds\Big),
\end{multline*}
where $c$ is finite. A simple integration then yields the existence of a finite $c$ such that:
\begin{eqnarray} \label{majdiffprop}\E \Big[\Big|P_{A,b_1}(t\wedge T_\eps^K\wedge S_\eps^K)-
\frac{\lfloor z_{Ab_1}K\rfloor}{\lfloor z_A K\rfloor}\Big|\Big]
\leq c\Big(\sqrt{\frac{t}{K}}+\eps\Big(1+ \frac{t}{\sqrt{K}}\Big)\Big).
\end{eqnarray}
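The integration step leading to \eqref{majdiffprop} can be checked numerically. The sketch below uses purely illustrative values for $r_K$, $f_a$, $S_{aA}$, $k_0$, $\eps$ and $K$ (none taken from the model), evaluates the integral term by a midpoint Riemann sum, and verifies that it is dominated by $c\,\eps(1+t/\sqrt{K})$ for a moderate constant $c$.

```python
import math

def integral_bound(eps, r_K, f_a, S_aA, k0, K, t, n_steps=20_000):
    """Midpoint Riemann-sum evaluation of
    eps * r_K * int_0^t (e^{-r_K f_a s/2} + 1/(K r_K) + e^{-S_aA s/(2(k0+1))})^{1/2} ds,
    the integral term appearing just before (majdiffprop)."""
    ds = t / n_steps
    total = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds
        total += ds * math.sqrt(
            math.exp(-r_K * f_a * s / 2)
            + 1.0 / (K * r_K)
            + math.exp(-S_aA * s / (2 * (k0 + 1)))
        )
    return eps * r_K * total

# Illustrative (hypothetical) parameter values for the sanity check only.
eps, r_K, f_a, S_aA, k0, K = 0.01, 0.1, 1.0, 1.0, 1, 10**6
for t in (1.0, 10.0, 50.0):
    lhs = integral_bound(eps, r_K, f_a, S_aA, k0, K, t)
    rhs = 10.0 * eps * (1 + t / math.sqrt(K))  # c = 10 suffices for these values
    assert lhs <= rhs
```

The check reflects the elementary estimate $\sqrt{a+b+c}\le\sqrt a+\sqrt b+\sqrt c$, which makes each of the three contributions integrable or of size $\eps\, t/\sqrt K$.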
Let us introduce the sequences of times
$$t_K^{(-)}=(1-c_1\eps)\frac{\log K}{S_{aA}}, \quad \text{and} \quad t_K^{(+)}=(1+c_1\eps)\frac{\log K}{S_{aA}},$$
where $c_1$ is a finite constant. Then according to Coupling \eqref{couplage11} and limit
\eqref{equi_hitting},
\begin{equation} \label{2slog} \underset{K \to \infty}{\lim} \P(T_\eps^K < t_K^{(-)}|T_\eps^K \leq S_\eps^K)=
\underset{K \to \infty}{\lim} \P(T_\eps^K > t_K^{(+)}|T_\eps^K \leq S_\eps^K)=0.\end{equation}
Hence applying \eqref{majdiffprop} at time $t_K^{(+)}$
and using \eqref{res_champ} and \eqref{2slog}, we bound the first term in
the expectation.
To bound the second term in the expectation, we introduce the notation
$$ A(\eps,K):=\E \Big[\mathbf{1}_{t_K^{(-)} \leq T_\eps^K\leq S_\eps^K\wedge t_K^{(+)}}
\Big|P_{A,b_1}^K( T_\eps^K\wedge S_\eps^K)-P_{a,b_1}^K( T_\eps^K\wedge S_\eps^K) \Big|\Big]. $$
From \eqref{2slog} we obtain
\begin{equation} \label{reducbound2} \underset{K \to \infty}{\limsup} \ \E \Big[\mathbf{1}_{ T_\eps^K\leq S_\eps^K}
\Big|P_{A,b_1}^K( T_\eps^K)-P_{a,b_1}^K( T_\eps^K) \Big|\Big]=\underset{K \to \infty}{\limsup} \ A(\eps,K), \end{equation}
and by using \eqref{defY}, the Cauchy-Schwarz Inequality, and \eqref{majespY} we get
\begin{eqnarray*}
A(\eps,K)&\leq &\E \Big[\sqrt{Y(t_K^{(+)}\wedge T_\eps^K\wedge S_\eps^K)} \Big] e^{-\frac{r_Kf_a}{4}t_K^{(-)}}\\
&\leq &\E^{1/2} \Big[{Y(t_K^{(+)}\wedge T_\eps^K\wedge S_\eps^K)} \Big] e^{-\frac{r_Kf_a}{4}t_K^{(-)}}\\
&\leq & c\Big( 1+ \frac{1}{Kr_K}e^{r_Kf_at_K^{(+)}/2}+e^{ (\frac{r_K f_a}{2}-\frac{S_{aA}}{2(k_0+1)} )t_K^{(+)}} \Big)^{1/2}
e^{-\frac{r_Kf_a}{4}t_K^{(-)}}\\
&\leq & c\Big( e^{-\frac{r_Kf_a}{4}t_K^{(-)}}+ \frac{1}{\sqrt{Kr_K}}e^{\frac{c_1\eps r_Kf_a\log K}{2S_{aA}}}+
e^{ (\frac{c_1\eps r_Kf_a\log K}{2S_{aA}}-\frac{S_{aA}}{4(k_0+1)} t_K^{(+)})} \Big),
\end{eqnarray*}
where the value of the constant $c$ can change from line to line. Assumption \ref{condstrong} then yields
$$ \underset{K \to \infty}{\limsup} \ A(\eps,K)=0, $$
and we end the proof of the second bound by applying \eqref{reducbound2}.
\end{proof}
The following Lemma states that during the second period, the neutral proportion stays almost constant in the $a$-population.
\begin{lem}\label{majdet}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$:
\begin{equation*}
\underset{K \to \infty}{\limsup}
\ \E\Big[\mathbf{1}_{ T_\eps^K\leq S_\eps^K}\Big|P^K_{a,b_1}(T_\eps^K+t_\eps(\textstyle{\frac{N^K(T_\eps^K)}{K}}))-\frac{z_{Ab_1}}{z_A}\Big|\Big]\leq c\eps.
\end{equation*}
\end{lem}
\begin{proof}
Let us introduce, for $z \in \R_+^\mathcal{E}$ and $\eps>0$, the set $\Gamma$ and the time $t_\eps$ defined as follows:
\begin{equation}\label{tetaCfini1}\Gamma:=\Big\{ z \in \R_+^\mathcal{E}, \Big|z_A- \bar{n}_A\Big|\leq 2\eps \frac{C_{A,a}}{C_{A,A}}, \Big|z_a-\eps\Big|\leq \frac{\eps}{2} \Big\}, \quad t_\eps:=\sup \{ t_\eps(z),z \in \Gamma\},
\end{equation}
where $t_\eps(z)$ has been defined in \eqref{deftepsz1}. According to Assumption \ref{assumption_eq}, $t_\eps<\infty$, and
$$ I(\Gamma,\eps):=\underset{z \in \Gamma}{\inf} \ \underset{t \leq t_\eps}{\inf}\ \{n_A^{(z)}(t),n_a^{(z)}(t)\}>0,$$
and we can introduce the stopping time
\begin{equation}\label{defL}
L_\eps^K(z)=\inf\big\{ t\geq 0, (N_A^{(z,K)}(t),N_a^{(z,K)}(t)) \notin [ I(\Gamma,\eps)K/2,(\bar{n}_A+\bar{n}_a)K]^2 \big\}.
\end{equation}
Finally, we denote by $(\mathcal{F}_t^K, t\geq 0)$ the canonical filtration of $N^K$.
Notice that on the event $\{ T_\eps^K\leq S_\eps^K \}$, $N(T_\eps^K)/K \in \Gamma$, thus $t_\eps(\textstyle{{N(T_\eps^K)}/{K}})\leq t_\eps$.
The semi-martingale decomposition \eqref{defM} and the definition of $G$ in \eqref{defG}, followed by two applications of the Strong Markov property and the Cauchy-Schwarz Inequality, yield:
\begin{multline}\label{majdebut}
\E\Big[ \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\Big|P_{a,b_1}\Big(T_\eps^K+t_\eps(\textstyle{\frac{N(T_\eps^K)}{K}})\wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})\Big)-P_{a,b_1}(T_\eps^K)\Big|\Big]
\\ \leq \E\Big[ \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\E\Big[ \Big|M_a\Big(T_\eps^K+t_\eps(\textstyle{\frac{N(T_\eps^K)}{K}})\wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})\Big)-M_a(T_\eps^K)\Big| +f_a \displaystyle\int_{T_\eps^K}^{T_\eps^K +t_\eps \wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})} |G|\Big|\mathcal{F}_{T_\eps^K}\Big]\Big]
\\ \leq \E\Big[ \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\Big\{\E^{1/2}\Big[ \langle M_a\rangle_{T_\eps^K+t_\eps\wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})}-\langle M_a\rangle_{T_\eps^K} \Big|\mathcal{F}_{T_\eps^K}\Big]+f_a \sqrt{t_\eps} \E^{1/2}\Big[\int_{T_\eps^K}^{T_\eps^K +t_\eps \wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})} G^2\Big|\mathcal{F}_{T_\eps^K}\Big]\Big\}\Big].
\end{multline}
To bound the first term of the right hand side we use the Strong Markov Property, Equation \eqref{crocheten1K1} and the definition of $L_\eps^K$ in \eqref{defL}.
We get
\begin{equation}\label{majt1} \E\Big[ \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\E^{1/2}\Big[ \langle M_a\rangle_{T_\eps^K+t_\eps\wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})}-\langle M_a\rangle_{T_\eps^K} \Big|\mathcal{F}_{T_\eps^K}\Big]\Big]\leq \sqrt{ \frac{2t_\eps C(a,\bar{n}_A+\bar{n}_a)}{I(\Gamma,\eps)K}}. \end{equation}
Let us now focus on the second term. Itô's formula with jumps yields for every $t\geq 0$,
$$ \E\Big[G^2\Big(t \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})\Big)\Big]\leq \E[G^2(0)]+\E\Big[\langle M_A\rangle_{t \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})}\Big]+\E\Big[\langle M_a\rangle_{t \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})}\Big], $$
and adding the Strong Markov Property we get
\begin{multline*}
\mathbf{1}_{ T_\eps^K\leq S_\eps^K} \E^{1/2}\Big[\int_{T_\eps^K}^{T_\eps^K +t_\eps \wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})} G^2\Big|\mathcal{F}_{T_\eps^K}\Big] \\
\leq \underset{z \in \Gamma}{\sup}\ \E^{1/2}\Big[\int_{0}^{t_\eps} \Big( G^2\Big(s \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})\Big)-G^2(0)\Big)ds\Big|N(0)=\lfloor z K \rfloor\Big] + \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\sqrt{t_\eps}|G(T_\eps^K)| \\
\leq \underset{z \in \Gamma}{\sup}\Big[\int_{0}^{t_\eps} \E\Big[\langle M_A\rangle_{s \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})}+\langle M_a\rangle_{s \wedge
L_\eps^K(\textstyle{\frac{N(0)}{K}})}\Big]\Big|N(0)=\lfloor z K \rfloor\Big]ds \Big]^{1/2}+ \mathbf{1}_{ T_\eps^K\leq S_\eps^K}\sqrt{t_\eps}|G(T_\eps^K)|.
\end{multline*}
Using again Equation \eqref{crocheten1K1} and the definition of $L_\eps^K$ in \eqref{defL}, together with Lemma \ref{majespP}, finally leads to
\begin{equation}\label{majt2} \E\Big[ \mathbf{1}_{ T_\eps^K\leq S_\eps^K} \E^{1/2}\Big[\int_{T_\eps^K}^{T_\eps^K +t_\eps \wedge
L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})} G^2\Big|\mathcal{F}_{T_\eps^K}\Big]\Big]
\leq c\Big(\frac{1}{\sqrt{K}}+\eps\Big),\end{equation}
for $\eps$ small enough and $K$ large enough, where $c$ is a finite constant.
Moreover \eqref{result_champa1} ensures that
$$ \P\Big({ T_\eps^K\leq S_\eps^K}, L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})\leq t_\eps(\textstyle{\frac{N(T_\eps^K)}{K}})\Big)
\leq \P\Big(\frac{N(T_\eps^K)}{K} \in \Theta,L_\eps^K(\textstyle{\frac{N(T_\eps^K)}{K}})\leq t_\eps(\textstyle{\frac{N(T_\eps^K)}{K}})\Big)\underset{K \to \infty}{\to}0,
$$
where $\Theta$ has been defined in \eqref{defTheta}.
Combining Equations \eqref{majdebut}, \eqref{majt1} and \eqref{majt2} with Lemma \ref{majespP} finally completes the proof of Lemma \ref{majdet}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_result} in the strong recombination regime]
Let us focus on the $A$-population extinction period. Thanks to the Strong Markov Property, we have:
\begin{multline}
\P \Big( \mathbf{1}_{N(T_\eps^K+t_\eps(N(T_\eps^K)/K))\in \Theta}\Big|P_{a,b_1}(T_{\text{ext}}^K)-
P_{a,b_1}(T_\eps^K+t_\eps(\textstyle{\frac{N(T_\eps^K)}{K}}))\Big|>\sqrt{\eps}\Big)\\
\leq \underset{z \in \Theta}{\sup}\ \P \Big(|P_{a,b_1}(T_{\text{ext}}^K)-P_{a,b_1}(0)|>\sqrt{\eps}\Big| N(0) =\lfloor zK \rfloor\Big).
\end{multline}
But Equation
\eqref{result_champa1} yields $\P(N(T_\eps^K+t_\eps(N(T_\eps^K)/K))/K \in \Theta|N(T_\eps^K)/K\in \Gamma)\to_{K\to \infty}1 $,
and $ \{ T_\eps^K\leq S_\eps^K\} \subset \{N(T_\eps^K)/K\in \Gamma\} $. Adding Equation \eqref{gdeprobabfix} and Lemmas \ref{third_step} and \ref{majdet},
the triangle inequality allows us to conclude
that for $\eps$ small enough
\begin{equation*}
\limsup_{K \to \infty} \hspace{.1cm}\P\Big(\Big|P_{a,b_1}^K(T^K_{\text{ext}})-\frac{z_{Ab_1}}{z_A}\Big|> \sqrt{\eps}\Big| \text{Fix}^K\Big)
\leq c\eps.
\end{equation*}
As $\P(\text{Fix}^K)\to_{K \to \infty} S_{aA}/f_a>0$, this is equivalent to the claim of Theorem \ref{main_result} in the strong recombination regime.
\end{proof}
\section{Proof of Theorem \ref{main_result} in the weak recombination regime}\label{proofweak}
\subsection{Coupling with a four dimensional population process and structure of the proof}
In this section we suppose that Assumptions \ref{assumption_eq}, \ref{defrK} and \ref{condweak} hold.
To lighten the proofs of Sections \ref{coal_reco} to \ref{bedescended} we introduce a coupling of the population process $N$ with a process $\tilde{N}=(\tilde{N}_{\alpha\beta},(\alpha,\beta) \in \mathcal{E})$ defined as follows for every $t \geq 0$:
\begin{eqnarray}
\tilde{N}(t) &=& \mathbf{1}_{t < S_\eps^K}N(t)+ \mathbf{1}_{t \geq S_\eps^K} \Big( e_{Ab_1}N_{Ab_1}((S_\eps^K)^-)+ e_{Ab_2}N_{Ab_2}((S_\eps^K)^-) \\
&&+
\int_0^t\int_{\R_+} \Big\{ e_{a b_1}\mathbf{1}_{\theta\leq b^K_{a b_1}(\tilde{N}({s^-}))}+e_{a b_2}\mathbf{1}_{b^K_{a b_1}(\tilde{N}({s^-}))<\theta\leq f_a \tilde{N}_a({s^-})}\nonumber \\
&&\hspace{1cm}\hspace{1cm}-e_{a b_1}\mathbf{1}_{0<\theta-f_a \tilde{N}_a(s^-)\leq d_{a b_1}^K(\tilde{N}({s^-}))}\nonumber \\
&&\hspace{1cm}\hspace{1cm}-e_{a b_2}\mathbf{1}_{d^K_{a b_1}(\tilde{N}({s^-}))<\theta-f_a \tilde{N}_a(s^-)\leq d_a^K(\tilde{N}({s^-}))}\Big\} Q(ds,d\theta)
\Big),
\end{eqnarray}
where the Poisson random measure $Q$ has been introduced in \eqref{defN}.
From \eqref{tildeTbiggerT} we know that
\begin{equation}\label{couplage2}\underset{K \to \infty}{\limsup} \ \P(\{\exists t \leq T_{\eps}^K, N(t) \neq \tilde{N}(t)\}, T_{\eps}^K<\infty)\leq c \eps.\end{equation}
Hence we will study the process $\tilde{N}$ and deduce from this study properties of the dynamics of the process $N$ during the first phase.
Moreover, as
we want to prove convergences on the fixation event $\text{Fix}^K$, defined in \eqref{defText}, inequalities
\eqref{gdeprobabfix} and \eqref{couplage2} allow us, when studying the dynamics of $\tilde{N}$ during the first phase, to restrict our attention
to the conditional probability measure:
\begin{equation}\label{defhatP}
\hat{\P}(.)=\P(.|\tilde{T}^K_{\eps} <\infty),
\end{equation}
where $\tilde{T}^K_{\eps} $ is the hitting time of $\lfloor \eps K \rfloor$ by the process $\tilde{N}_a$:
\begin{equation} \label{TKTKeps2} \tilde{T}^K_\eps := \inf \Big\{ t \geq 0, \tilde{N}^K_a(t)= \lfloor \eps K \rfloor \Big\}. \end{equation}
Expectations and variances associated
with this probability measure are denoted by $\hat{\E}$ and $\hat{\var}$ respectively.
Let us notice that, as by definition $\tilde{N}_A(t)\in I_\eps^K $ for all $t \geq 0$,
Coupling \eqref{couplage11} with the birth and death processes $Z^-_\eps$ and $Z^+_\eps$ holds up to time $\tilde{T}_\eps^K$ for the process $\tilde{N}$:
\begin{equation}\label{couplage12}
Z^-_\eps(t) \leq \tilde{N}_a^K(t) \leq Z^+_\eps(t), \quad \text{for all } t < \tilde{T}^K_\eps.
\end{equation}
The sketch of the proof is the following. We first focus on the neutral proportion in the $a$ population
at time $\tilde{T}_\eps^K$. The idea is to consider the neutral alleles of the $a$ individuals at time $\tilde{T}_\eps^K$ and follow their ancestral lines back until
the beginning of the sweep, to know whether they are descended from the first mutant or not.
Two kinds of events can happen to a neutral lineage: coalescences and m-recombinations
(see Section \ref{coal_reco});
we show that we can neglect the coalescences and the occurrence of several m-recombinations for a lineage during the first period. Therefore,
our approximation of the genealogy is the following: two neutral lineages are independent, and each of them undergoes one recombination with an
$A$-individual during the first period with probability $\rho_K$. If it has undergone a recombination with an $A$-individual,
it can be an allele $b_1$ or $b_2$. Otherwise it is descended from the first mutant and is an allele $b_1$.
To obtain this approximation we follow the approach presented by Schweinsberg
and Durrett in \cite{schweinsberg2005random}.
In that paper, the authors describe the population dynamics by a variant of the Moran model with two loci and recombination. In their model,
the population size is constant and each individual has a constant selective advantage, $0$ or $s$. In our model the population size varies and each
individual's ability to survive and give birth depends on the population state. After the study of the first period we check that the second and third periods have little influence on the neutral
proportion in the $a$-population.
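The limiting picture described above (independent lineages, each undergoing at most one m-recombination) can be sketched as a simple sampling scheme. The snippet below is a minimal Monte Carlo illustration, not the paper's construction: $\rho$ plays the role of $\rho_K$ and $q$ that of the neutral $b_1$-proportion in the $A$-population, both set to illustrative values.

```python
import random

def simulate_b1_fraction(n_lineages, rho, q, seed=0):
    """Monte Carlo sketch of the approximate genealogy: each neutral lineage
    independently undergoes one m-recombination with an A-individual with
    probability rho; a recombined lineage picks allele b_1 with probability q,
    while a non-recombined lineage descends from the first mutant (allele b_1)."""
    rng = random.Random(seed)
    b1 = 0
    for _ in range(n_lineages):
        if rng.random() < rho:       # m-recombination with an A-individual
            b1 += rng.random() < q   # inherits b_1 with probability q
        else:                        # descends from the first mutant
            b1 += 1
    return b1 / n_lineages

rho, q = 0.3, 0.6
expected = (1 - rho) + rho * q       # closed form under independence
freq = simulate_b1_fraction(200_000, rho, q)
assert abs(freq - expected) < 0.01
```

Under this approximation the expected $b_1$-frequency among $a$-individuals is simply $(1-\rho_K) + \rho_K\, z_{Ab_1}/z_A$, which is the shape of the limit appearing in the theorem.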
\subsection{Coalescence and m-recombination times} \label{coal_reco}
Let us introduce the jump times of the stopped Markov process $(\tilde{N}^K(t),t\leq \tilde{T}_\eps^K)$,
$0=:\tau_0^K<\tau_1^K<...< \tau_{J^K}^K:=\tilde{T}_\eps^K$, where $J^K$ denotes the number of jumps of $\tilde{N}^K$ between $0$ and $\tilde{T}_\eps^K$, and the time of the
$m$-th jump is:
\begin{equation*}\label{deftpssauts}
\tau_m^K= \inf \Big\{ t> \tau_{m-1}^K, \tilde{N}^K(t)\neq \tilde{N}^K(\tau_{m-1}^K) \Big\},\hspace{.1cm} 1\leq m \leq J^K.
\end{equation*}
Let us sample two individuals with the $a$ allele uniformly at random at time $\tilde{T}_\eps^K$ and denote by $\beta_p$ and $\beta_q$ their neutral alleles.
We want to follow their genealogy backward in time and know at each time between $0$ and $\tilde{T}_\eps^K$ the types ($A$ or $a$) of the individuals
carrying $\beta_p$ and $\beta_q$.
We say that $\beta_p$ and $\beta_q$ coalesce at time $\tau_m^K$ if they are carried by two different individuals at time $\tau_{m}^K$ and by the same
individual at time $\tau_{m-1}^K$. In other words, the individual carrying the allele $\beta_p$ (or $\beta_q$) at time $\tau_{m}^K$ is a newborn and
has inherited its neutral allele from the individual carrying allele $\beta_q$ (or $\beta_p$) at time $\tau_{m-1}^K$.
The jump number at the coalescence time is denoted by
\begin{equation*}\label{tpscoal}
TC^K(\beta_p,\beta_q):=\left\{\begin{array}{ll}\sup\{ m\leq J^K, \beta_p \text{ and } \beta_q \text{ coalesce at time } \tau_m^K \},& \text{ if $\beta_p$ and $\beta_q$ coalesce }\\
-\infty,& \text{ otherwise}. \end{array}\right.
\end{equation*}
We say that $\beta_p$ m-recombines at time $\tau_m^K$ if the individual carrying the allele $\beta_p$ at time $\tau^K_m$ is a newborn,
carries the allele $\alpha \in \mathcal{A}$, and has inherited its allele $\beta_p$ from an individual
carrying allele $\bar{\alpha}$. In other words, an m-recombination is a recombination which modifies the selected allele linked to the neutral
allele.
The jump numbers of the first and second (backward in time) m-recombinations are denoted by:
\begin{equation*}\label{tpsreco}
TR^K_1(\beta_p):=\left\{\begin{array}{ll}
\sup\{ m\leq J^K, \beta_p \text{ m-recombines at time } \tau_m^K\} ,& \text{if there is at least one m-recombination}\\
-\infty, &\text{otherwise},
\end{array}\right.
\end{equation*}
\begin{equation*}\label{tpsreco2}
TR^K_2(\beta_p):=\left\{\begin{array}{ll}
\sup\{ m<TR^K_1(\beta_p), \beta_p \text{ m-recombines at time } \tau_m^K\} ,& \text{if there are at least two}\\
& \text{m-recombinations}\\
-\infty,& \text{otherwise}.
\end{array}\right.
\end{equation*}
Let us now focus on the probability for a coalescence to occur conditionally on the state of the process $(\tilde{N}_A,\tilde{N}_a)$ at two successive
jump times. We denote by
$p_{\alpha_1\alpha_2}^{c_K}(n)$ the probability that the genealogies of two uniformly sampled neutral alleles associated respectively with alleles $\alpha_1$ and
$\alpha_2\in \mathcal{A}$
at time $\tau_m^K$ coalesce at this time conditionally on $(\tilde{N}_A^K(\tau_{m-1}^K),\tilde{N}_a^K(\tau_{m-1}^K))=n\in \N^2$ and on the birth of an individual
carrying allele $\alpha_1 \in \mathcal{A}$ at time $\tau_{m}^K$. Then we have the following result:
\begin{lem}\label{lempcoal}
For every $n=(n_A,n_a) \in \N^2$ and $\alpha \in \mathcal{A}$, we have:
\begin{equation}\label{expr_reco1} p_{\alpha \alpha}^{c_K}(n)=\frac{2}{n_\alpha(n_\alpha+1)}\Big( 1-\frac{r_K f_{\bar{\alpha}} n_{\bar{\alpha}}}
{f_An_A+f_an_a} \Big)\quad \text{and} \quad p_{\alpha \bar{\alpha}}^{c_K}(n) = \frac{r_Kf_{\bar{\alpha}}}{(n_\alpha+1)(f_An_A+f_an_a)}.
\end{equation}
\end{lem}
\begin{proof}
We only prove the expression of $p_{\alpha \alpha}^{c_K}(n)$, as the calculations are similar for $p_{\alpha \bar{\alpha}}^{c_K}(n)$. If there is an m-recombination, we cannot have the coalescence of two neutral alleles associated with allele $\alpha$ at time
$\tau_m^K$. With probability $1-r_K f_{\bar{\alpha}}n_{\bar{\alpha}}/(f_An_A+f_an_a)$ there is no
m-recombination and the parent transmitting its neutral allele carries the allele ${\alpha}$. When there is no m-recombination, two individuals
among those carrying allele
$\alpha$ hold neutral alleles which were in the same individual at time $\tau_{m-1}^K$. We have a probability ${2}/({n_\alpha(n_\alpha+1)})$
to pick this pair of individuals
among the $(n_\alpha+1)$ $\alpha$-individuals.\end{proof}
\begin{rem}
An m-recombination for a neutral allele associated with an $\alpha$ allele is a coalescence with an $\bar{\alpha}$-individual.
Thus if we denote by $p_\alpha^{r_K}(n)$ the probability that an
$\alpha$-individual, chosen uniformly at time $\tau_{m}^K$, is the newborn and underwent an m-recombination at its birth, conditionally on
$(\tilde{N}_A^K(\tau_{m-1}^K),\tilde{N}_a^K(\tau_{m-1}^K))=n \in \N^2$ and on the birth of an $\alpha$-individual at time $\tau_{m}^K$, we get
\begin{equation}\label{expr_reco2}
p_\alpha^{r_K}(n)=n_{\bar{\alpha}}p_{\alpha \bar{\alpha}}^{c_K}(n) = \frac{n_{\bar{\alpha}}r_Kf_{\bar{\alpha}}}{(n_\alpha+1)(f_An_A+f_an_a)}.
\end{equation}
Moreover, if we recall the definition of $I_\eps^K$ in \eqref{compact1}, we notice that there exists a finite constant $c$ such that for
$k < \lfloor \eps K \rfloor$,
\begin{equation} \label{rqpr1}
(1-c\eps)\frac{r_K}{k+1} \leq \inf_{n_A \in I_\eps^K}\hspace{.2cm} p_a^{r_K}(n_A,k) \leq \sup_{n_A \in I_\eps^K}\hspace{.2cm} p_a^{r_K}(n_A,k)\leq \frac{r_K}{k+1}.
\end{equation}
\end{rem}
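The explicit formulas \eqref{expr_reco1}, \eqref{expr_reco2} and the bracketing \eqref{rqpr1} lend themselves to a numerical sanity check. In the sketch below the interval $I_\eps^K$ and the constant $c$ are replaced by illustrative stand-ins (all parameter values are hypothetical, chosen only so that the bounds can be tested):

```python
def p_coal(alpha, n_A, n_a, f_A, f_a, r_K):
    """p_{alpha,bar-alpha}^{c_K}(n) from (expr_reco1): probability that a
    sampled neutral lineage coalesces with a bar-alpha individual."""
    if alpha == "a":
        return r_K * f_A / ((n_a + 1) * (f_A * n_A + f_a * n_a))
    return r_K * f_a / ((n_A + 1) * (f_A * n_A + f_a * n_a))

def p_mreco_a(n_A, n_a, f_A, f_a, r_K):
    """p_a^{r_K}(n) = n_A * p_{aA}^{c_K}(n), Equation (expr_reco2)."""
    return n_A * p_coal("a", n_A, n_a, f_A, f_a, r_K)

# Illustrative parameters; lo_nA is a stand-in for the lower edge of I_eps^K.
f_A, f_a, r_K, K, eps, nbar_A = 1.0, 2.0, 1e-3, 10**5, 0.01, 1.0
c = f_a / (f_A * (nbar_A - eps))     # a valid constant for these values
lo_nA = int((nbar_A - eps) * K)
for k in (1, 10, int(eps * K) - 1):  # k < floor(eps*K), as in (rqpr1)
    for n_A in (lo_nA, int(nbar_A * K)):
        p = p_mreco_a(n_A, k, f_A, f_a, r_K)
        assert (1 - c * eps) * r_K / (k + 1) <= p <= r_K / (k + 1)
```

The upper bound follows from $f_An_A\le f_An_A+f_ak$, and the lower bound from $f_ak/(f_An_A)\le c\eps$ for $k<\lfloor\eps K\rfloor$ and $n_A$ in the stand-in interval.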
\subsection{Jumps of mutant population during the first period} \label{jumps}
We want to count the number of coalescences and m-recombinations in the lineages of the two uniformly sampled neutral alleles
$\beta_p$ and $\beta_q$. By definition,
these events can only occur at a birth time. Thus we need to study the number of upcrossings of the process $\tilde{N}_a^K$ before
$\tilde{T}_\varepsilon^K$
(Lemma \ref{uphold}). This allows us to prove that the probability
that a lineage is affected by two m-recombinations, or that two lineages coalesce and then (backward in time) are
affected by an m-recombination, is negligible (Lemma \ref{deuxreco}).
Then we obtain an approximation of the probability that a lineage is affected by an
m-recombination (Lemma \ref{lemvareta}), and finally we check that two lineages are approximately independent (Equation \eqref{dpdce}).
The last step consists in controlling the neutral proportion in the $A$-population (Lemma \ref{lemmajpro}). Indeed, it gives us the probability
that a neutral allele which has undergone an m-recombination is a $b_1$ or a $b_2$.\\
Let us denote by $\zeta_k^K$ the jump number of the last visit to $k$ before the hitting of $\lfloor \eps K \rfloor$,
\begin{equation}\label{zeta} \zeta_k^K: = \sup \{m \leq J^K, \tilde{N}_a^K(\tau_m^K)=k \}, \quad 1\leq k \leq \lfloor \varepsilon K \rfloor. \end{equation}
This allows us to introduce for $0< j\leq k<\lfloor \varepsilon K \rfloor$ the number of upcrossings from $k$ to $k+1$ for the
process $\tilde{N}_a^K$ before and after the last visit to $j$:
\begin{equation} \label{U1} U_{j,k}^{(K,1)}:=\# \{m \in \{0,...,\zeta_j^K-1\}, (\tilde{N}_a^K(\tau_m^K),\tilde{N}_a^K(\tau_{m+1}^K))=(k,k+1) \},\end{equation}
\begin{equation} \label{U2} U_{j,k}^{(K,2)}:=\# \{m \in \{\zeta_j^K,...,J^K-1\}, (\tilde{N}_a^K(\tau_m^K),\tilde{N}_a^K(\tau_{m+1}^K))=(k,k+1) \}.\end{equation}
We also introduce the number of jumps of the $A$-population size when there are $k$ $a$-individuals and
the total number of upcrossings from $k$ to $k+1$ before $\tilde{T}_\eps^K$:
\begin{equation} \label{H1} H_{k}^K:=\# \{m < J^K, \tilde{N}_a^K(\tau_m^K)=\tilde{N}_a^K(\tau_{m+1}^K)=k \},\end{equation}
\begin{equation} \label{Uk} U_{k}^K:=U_{j,k}^{(K,1)}+U_{j,k}^{(K,2)}=\# \{m < J^K, (\tilde{N}_a^K(\tau_m^K),\tilde{N}_a^K(\tau_{m+1}^K))=(k,k+1) \}.\end{equation}
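The counting statistics \eqref{zeta}--\eqref{Uk} are purely combinatorial and can be computed from any realized path of $\tilde{N}_a^K$ at jump times. A minimal sketch follows; the path used below is a hand-made toy example, not a simulation of the model.

```python
def jump_statistics(path, j, k):
    """For a discrete path (values of N_a^K at successive jump times), return
    (H_k, U1, U2, U_k) where
      H_k : jumps leaving N_a unchanged at value k (A-population jumps),
      U1  : upcrossings k -> k+1 strictly before the last visit to j,
      U2  : upcrossings k -> k+1 from the last visit to j onwards,
      U_k : total upcrossings k -> k+1,
    mirroring definitions (zeta), (U1), (U2), (H1) and (Uk)."""
    J = len(path) - 1
    zeta_j = max(m for m in range(J + 1) if path[m] == j)  # last visit to j
    H = sum(1 for m in range(J) if path[m] == path[m + 1] == k)
    U1 = sum(1 for m in range(zeta_j)
             if (path[m], path[m + 1]) == (k, k + 1))
    U2 = sum(1 for m in range(zeta_j, J)
             if (path[m], path[m + 1]) == (k, k + 1))
    return H, U1, U2, U1 + U2

# Hand-checked example: values of N_a at jump times tau_0,...,tau_6.
path = [1, 1, 2, 1, 2, 2, 3]
assert jump_statistics(path, j=1, k=1) == (1, 1, 1, 2)
# one A-jump at level 1 (m=0); upcrossings 1 -> 2 at m=1 (before the last
# visit to 1, which is m=3) and at m=3 (from that visit onwards)
```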
The next Lemma states moment properties of these jump numbers.
Recall Definition (\ref{def_s_-s_+1}). Then if we define
\begin{equation}\label{deflambda} \lambda_\eps:=\frac{(1-s_-(\eps))^3}{(1-s_+(\eps))^{2}}, \end{equation}
which belongs to $(0,1)$ for $\eps$ small enough, we have
\begin{lem}\label{uphold}
There exist two positive and finite constants $\eps_0$ and $c$ such that for $\eps\leq \eps_0$, $K$ large enough
and $1\leq j \leq k< \lfloor \varepsilon K \rfloor$,
\begin{equation} \label{E''U} \hat{\E}[H_{j}^{K}]\leq \frac{12f_A\bar{n}_AK}{s^4_-(\eps)f_aj} ,
\quad \hat{\E} [ (U_{j,k}^{(K,1)})^2 ]\leq \frac{4 \lambda_\eps^{k-j}}{ s^7_-(\eps)(1-s_+(\eps))},
\end{equation}
\begin{equation} \label{majcov} \hat{\E}[(U_{j}^{K})^2]\leq \frac{2}{s^2_-(\eps)} ,\quad \Big|\hat{\cov}(U_{j,k}^{(K,2)},U_{j}^{K})\Big|\leq c(\eps+(1-s_-( \eps))^{k-j}),\end{equation}
and
\begin{equation}\label{espnoreco}
r_K\Big| \sum_{k=1}^{\lfloor \eps K \rfloor -1} \frac{\hat{\E}[U_k^K]}{k+1}-\frac{f_a\log K}{S_{aA}} \Big|\leq c\eps.
\end{equation}
\end{lem}
This Lemma is used extensively in Sections \ref{recocoal} and \ref{bedescended}. Indeed, we shall decompose on the possible states of the population when a
birth occurs, and apply Equations \eqref{expr_reco1} and \eqref{expr_reco2}
to express the probability of coalescences and m-recombinations at each birth event. The proof of Lemma \ref{uphold} is quite technical and is
postponed to Appendix \ref{prooflemma}.
\subsection{Negligible events}\label{recocoal}
The next Lemma bounds the probability that two m-recombinations occur in a neutral lineage and the probability that a couple
of neutral lineages coalesce and then m-recombine when we consider the genealogy backward in time.
\begin{lem}\label{deuxreco}
There exist two positive finite constants $c$ and $\eps_0$ such that for $K \in \N$ and $\eps\leq \eps_0$,
\begin{equation*}\label{prop24}
\hat{\P}\Big(TR_2^K(\beta_p)\neq -\infty\Big)\leq \frac{c}{\log K}, \quad \text{and} \quad \hat{\P}\Big(0 \leq TR^K_1(\beta_p)
\leq TC^K(\beta_p,\beta_q)\Big)\leq \frac{c}{\log K}. \end{equation*}
\end{lem}
\begin{proof}
By definition, the neutral allele $\beta_p$ is associated with an allele $a$ at time $\tilde{T}_\eps^K$. Hence if there are at least two m-recombinations,
there exists a time between $0$ and $\tilde{T}_\eps^K$ at which $\beta_p$ underwent
an m-recombination while it was associated with an allele $A$.
We shall work conditionally on the stopped process $((\tilde{N}_A(\tau_m^K),\tilde{N}_a(\tau_m^K)), m \leq J^K)$ and decompose according
to the $a$-population size when this m-recombination occurs. We get the inclusion:
$$ \{ TR_2^K(\beta_p)\neq -\infty \} \subset \underset{k=1}{\overset{\lfloor \eps K\rfloor-1}{\bigcup}}\underset{m=1}{\overset{J^K}{\bigcup}}
\Big\{ TR_2^K(\beta_p)=m, \tilde{N}_a(\tau^K_{m-1})=\tilde{N}_a(\tau^K_{m})=k \Big\}. $$
We recall the definition of
$I_\eps^K$ in \eqref{compact1}. Thanks to Equations (\ref{expr_reco2}) and
\eqref{E''U}, we get:
\begin{eqnarray*}
\hat{\P}(TR_2^K(\beta_p)\neq -\infty)\leq \underset{k=1}{\overset{\lfloor \varepsilon K \rfloor-1}{\sum}}
\sup_{n_A\in I_\eps^K} p_A^{r_K}(n_A,k)\hat{\E}[H_{k}^{K}]\leq \frac{12r_K\bar{n}_A\eps}{s^4_-(\eps)(\bar{n}_A-2\eps C_{A,a}/C_{A,A})^2} .
\end{eqnarray*}
Assumption \ref{condweak} on weak recombination completes the proof of the first inequality in Lemma \ref{deuxreco}.
The proof of the second one is divided into two steps, presented after introducing, for $(\alpha,\alpha') \in \mathcal{A}^2$ and $m \leq J^K$, the notations
$$ (\alpha \beta_p)_m:=\{ \text{the neutral allele }\beta_p \text{ is associated with the allele } \alpha \text{ at time } \tau_m^K \}, $$
$$ (\alpha \beta_p,\alpha' \beta_q)_m:=(\alpha \beta_p)_m \cap (\alpha' \beta_q)_m .$$
\noindent \textit{First step:} We show that the probability that $\beta_p$ is associated with an allele $A$ at the coalescence time is negligible. We first recall the inclusion,
$$ \{ TC^K(\beta_p,\beta_q)\neq -\infty, (A\beta_p)_{TC^K(\beta_p,\beta_q)} \} \subset \underset{k=1}{\overset{\lfloor \eps K\rfloor-1}{\bigcup}}\underset{m=1}{\overset{J^K}{\bigcup}}
\Big\{ TC^K(\beta_p,\beta_q)=m, \tilde{N}_a(\tau^K_{m-1})=k, (A\beta_p)_m \Big\} , $$
and decompose on the possible selected alleles associated with $\beta_q$ and on the type of the newborn at
the coalescence time.
Using Lemma \ref{lempcoal},
Equations \eqref{E''U} and \eqref{majcov}, and
$r_K\leq 1$, we get
\begin{multline}
\hat{\P}(TC^K(\beta_p,\beta_q)\neq -\infty, (A\beta_p)_{TC^K(\beta_p,\beta_q)}) \\\leq
\underset{k=1}{\overset{\lfloor \varepsilon K \rfloor-1}{\sum}}
\Big(\Big[\sup_{n_A \in I_\eps^K} p_{AA}^{c_K}(n_A,k) + \sup_{n_A \in I_\eps^K} p_{Aa}^{c_K}(n_A,k)\Big]\hat{\E}[H_{k}^{K}]
+\sup_{n_A \in I_\eps^K} p_{aA}^{c_K}(n_A,k)\hat{\E}[U_{k}^{K}]\Big)
\leq \frac{c}{K}\underset{k=1}{\overset{\lfloor \varepsilon K \rfloor-1}{\sum}} \frac{1}{k},
\end{multline}
for a finite $c$, which is of order $\log K/K$.\\
\noindent \textit{Second step:} Then, we focus on the case where
$\beta_p$ and $\beta_q$ are associated with an allele $a$ at the coalescence time.
The inclusion
\begin{eqnarray*}
\Big\{\tilde{N}_a\Big(\tau_{TC^K(\beta_p,\beta_q)-1}^K\Big)=k, (a\beta_p, a\beta_q)_{TC^K(\beta_p,\beta_q)}\Big\}\subset \underset{m=1}{\overset{J^K}{\bigcup}}
\Big\{ TC^K(\beta_p,\beta_q)=m, \tilde{N}_a(\tau^K_{m-1})=k, (a\beta_p,a\beta_q)_m \Big\} ,
\end{eqnarray*}
and Equations (\ref{expr_reco1}) and \eqref{majcov} yield for every $k \in \{1,...,\lfloor \eps K\rfloor-1\}$:
\begin{equation*}\label{lem42} \hat{\P}\Big(\tilde{N}_a\Big(\tau_{TC^K(\beta_p,\beta_q)-1}^K\Big)=k, (a\beta_p, a\beta_q)_{TC^K(\beta_p,\beta_q)}\Big) \leq
\underset{n_A \in I_\eps^K}{\sup}\hspace{.2cm} p_{aa}^{c_K}(n_A,k)\hat{\E}[U_{k}^{K}]\leq \frac{4}{{s^2_-(\eps)}k(k+1)}. \end{equation*}
If $\beta_p$ and $\beta_q$ coalesce and then undergo their first m-recombination when we look backward in time, and if the $a$-population has size $k$
at the coalescence time, then the m-recombination occurs before the $\zeta^K_k$-th jump when we look forward in time.
For $k,l < \lfloor \eps K\rfloor$,
\begin{multline*}
\hat{\P}\Big(\tilde{N}_a\Big(\tau^K_{TR_1^K(\beta_p)}\Big)=l,0 \leq TR_1^K(\beta_p) \leq TC^K(\beta_p,\beta_q)\Big| \tilde{N}_a\Big(\tau_{TC^K(\beta_p,\beta_q)-1}^K\Big)=k, (a\beta_p, a\beta_q)_{TC^K(\beta_p,\beta_q)}\Big)\\
\leq \sup_{n_A\in I_\eps^K} p^{r_K}_a(n_A,l)\Big( \mathbf{1}_{k>l}\hat{\E} [U_{l}^{K}]+\mathbf{1}_{k\leq l}\hat{\E} [U_{k,l}^{(K,1)}] \Big)
\leq
\frac{2r_K}{(l+1)s^2_-(\eps)}\Big( \mathbf{1}_{k>l}+ \frac{2\mathbf{1}_{k\leq l}\lambda_\eps^{l-k}}{s_-^5(\eps)(1-s_+(\eps))}
\Big),
\end{multline*}
where the last inequality is a consequence of \eqref{rqpr1}, \eqref{E''U} and \eqref{majcov}. The last two equations finally yield the existence of a finite $c$ such that for every $K \in \N$:
\begin{equation*} \hat{\P}(0 \leq TR_1^K(\beta_p) \leq TC^K(\beta_p,\beta_q), (a\beta_p, a\beta_q)_{TC^K(\beta_p,\beta_q)})
\leq cr_K \underset{k,l=1}{\overset{\lfloor \varepsilon K \rfloor}{\sum}} \frac{ \mathbf{1}_{k>l}+ \mathbf{1}_{k\leq l}\lambda_\eps^{l-k}}{k(k+1)(l+1)} \leq cr_K,
\end{equation*}
which completes the proof of Lemma \ref{deuxreco} with Assumption \ref{condweak}. \end{proof}
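The last step uses the uniform (in $K$) boundedness of the double sum, which is what makes the bound $c\,r_K$ possible. This can be checked numerically; in the sketch below $M$ stands in for $\lfloor \eps K\rfloor$ and $\lambda_\eps=0.8$ is an illustrative value in $(0,1)$.

```python
def double_sum(M, lam):
    """The double sum bounding P(0 <= TR_1 <= TC) up to the constant c*r_K:
    sum over 1 <= k,l <= M of [1_{k>l} + 1_{k<=l} lam^{l-k}] / (k(k+1)(l+1))."""
    total = 0.0
    for k in range(1, M + 1):
        for l in range(1, M + 1):
            w = 1.0 if k > l else lam ** (l - k)
            total += w / (k * (k + 1) * (l + 1))
    return total

# The sum stays bounded as M grows: the k > l part behaves like
# sum log(k)/k^2 and the k <= l part is controlled by the geometric factor.
s200, s400 = double_sum(200, 0.8), double_sum(400, 0.8)
assert s200 <= s400 < s200 + 0.05   # small tail when M doubles
assert s400 < 3.0                   # uniform bound for lam = 0.8
```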
\subsection{Probability to be descended from the first mutant} \label{bedescended}
We want to estimate the probability for the neutral lineage of $\beta_p$ to undergo no m-recombination. Recall Definition \eqref{defrhoK}:
\begin{lem}\label{lemvareta}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$:
$$ \limsup_{K \to \infty}\Big| \hat{\P}(TR^K_1(\beta_p)=-\infty)-(1-\rho_K) \Big|\leq c\eps^{1/2} .$$
\end{lem}
\begin{proof}
We introduce $\rho_m^K$, the conditional probability that the neutral lineage
of $\beta_p$ m-recombines at time $\tau_m^K$, given $(\tilde{N}_A(\tau^K_n),\tilde{N}_a(\tau^K_n),n\leq J^K)$ and given that
it has not m-recombined during the time interval $]\tau_m^K,\tilde{T}_\eps^K]$.
The last condition implies that $\beta_p$ is associated with an allele $a$ at time $\tau_m^K$.
\begin{equation}
\label{defrhoKm} \rho^K_m:=\mathbf{1}_{\{ \tilde{N}_a^K(\tau_m^K)-\tilde{N}_a^K(\tau_{m-1}^K)=1 \}} p_a^{r_K}(\tilde{N}_A^K(\tau_{m-1}^K),\tilde{N}_a^K(\tau_{m-1}^K)).
\end{equation}
We also introduce $\eta^K$, the sum of these conditional
probabilities for $1\leq m \leq J^K$:
\begin{equation*}
\eta^K:={\sum_{m=1}^{J^K}}\rho^K_m.
\end{equation*}
We want to give a rigorous meaning to the following chain of approximations:
$$ \hat{\P}\Big(TR^K_1(\beta_p)=-\infty\Big|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}\Big)=\prod_{m=1}^{J^K}(1-\rho^K_m) \sim \prod_{m=1}^{J^K}e^{-\rho^K_m}\sim e^{-\E[\eta^K]}, $$
when $K$ goes to infinity. Jensen's Inequality, the triangle inequality, and the Mean Value Theorem imply
\begin{multline}\label{ineg_tri} \hat{\E}\Big| \hat{\P}\Big(TR^K_1(\beta_p)=-\infty\Big|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}\Big)
-(1-\rho_K) \Big|
\leq \\
\hat{\E}\Big|\hat{\P}\Big(TR^K_1(\beta_p)=-\infty\Big|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}\Big)-e^{-\eta^K} \Big|
+ \Big|e^{-\hat{\E}\eta^K}-(1-\rho_K) \Big|+ \hat{\E}\Big|\eta^K-\hat{\E}\eta^K \Big| .\end{multline}
We aim to bound the right-hand side of (\ref{ineg_tri}).
The bounding of the first term follows the method developed in the proof of Lemma 3.6 in \cite{schweinsberg2005random}. We refer to that proof and obtain
the following Poisson approximation
\begin{eqnarray}\label{bound1} \hat{\E}\Big|\hat{\P}\Big(TR^K_1(\beta_p)=-\infty\Big|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}\Big)-e^{-\eta^K} \Big| &
\leq &
\sum_{k=1}^{\lfloor \varepsilon K \rfloor-1}\sup_{n_A \in I_\eps^K} \Big( p_a^{r_K}(n_A,k)\Big)^2\hat{\E}[U_{k}^{K}] \nonumber \\
&\leq &\frac{\pi^2 r_K^2}{3s_-^2(\eps)} ,\end{eqnarray}
where $I_\eps^K$ has been defined in \eqref{compact1} and the last inequality follows from \eqref{rqpr1} and \eqref{majcov}.
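The Poisson approximation behind \eqref{bound1} rests on the elementary estimate $0 \le e^{-\sum_m p_m} - \prod_m (1-p_m) \le \sum_m p_m^2$, valid when every $p_m \le 1/2$. A quick numerical check, with per-jump probabilities shaped like $\rho^K_m \sim r_K/(k+1)$ and an illustrative value of $r_K$:

```python
import math

def poisson_approx_gap(ps):
    """Gap exp(-sum p_m) - prod(1 - p_m), the quantity controlled by the
    Poisson approximation (bound1); non-negative since 1 - p <= e^{-p}."""
    prod = 1.0
    for p in ps:
        prod *= 1.0 - p
    return math.exp(-sum(ps)) - prod

# Per-jump m-recombination probabilities shaped like rho_m^K ~ r_K/(k+1).
r_K = 0.01  # illustrative value, not from the model
ps = [r_K / (k + 1) for k in range(1, 10_000)]
gap = poisson_approx_gap(ps)
assert 0.0 <= gap <= sum(p * p for p in ps)
```

With $p_m$ of order $r_K/(k+1)$, the quadratic term $\sum p_m^2$ is of order $r_K^2$, which is exactly the size of the error in \eqref{bound1}.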
To bound the second term, we need to estimate $\hat{\E}[\eta^K]$. Inequality \eqref{rqpr1} implies
\begin{equation}\label{infpra2}
(1-c\eps)r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1}\frac{U_k^K}{k+1}\leq \eta^K \leq r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1}\frac{U_k^K}{k+1}.
\end{equation}
Adding \eqref{espnoreco} we get that for $\eps$ small enough,
\begin{equation}\label{secondterm}
\limsup_{K \to \infty} \Big|\exp(-\hat{\E}[\eta^K])-(1-\rho_K)\Big|\leq c\eps.
\end{equation}
The bounding of the last term of (\ref{ineg_tri}) requires a fine study of the dependences between upcrossing numbers before and after the last visit
to a given
integer by the mutant population size. In
particular, we make extensive use of Equation (\ref{majcov}). We observe that $\hat{\E}|\eta^K-\hat{\E}\eta^K | \leq (\hat{\var}\hspace{.1cm}\eta^K)^{1/2}$.
The variance of $\eta^K$ itself is quite involved to study, but according to Assumption \ref{condweak} and Equations \eqref{infpra2} and \eqref{majcov},
\begin{eqnarray}\label{tildeeta1}\Big|\hat{\var}\hspace{.1cm}\eta^K-\hat{\var}\hspace{.1cm}\Big( r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1}\frac{U_k^K}{k+1} \Big)\Big|
&\leq & c \varepsilon \hat{\E}\Big[\Big( r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1}\frac{U_k^K}{k+1} \Big)^2\Big] \nonumber \\
& \leq & c\eps r_K^2 \sum_{k,l=1}^{\lfloor \eps K \rfloor-1}\frac{\hat{\E}[(U_k^K)^2]+\hat{\E}[(U_l^K)^2]}{(k+1)(l+1)}
\leq c\eps,\end{eqnarray}
for a finite $c$ and $K$ large enough. Let $k \leq l < \lfloor \eps K\rfloor $,
and recall that by definition, $U_{l}^{K}=U_{k,l}^{(K,1)}+U_{k,l}^{(K,2)}$. Then we have
\begin{eqnarray*}
\Big|\hat{\cov}(U_{k}^{K},U_{l}^{K})\Big| \leq \Big(\hat{\E}[(U_{k}^{K})^2]\hat{\E}[(U_{k,l}^{(K,1)})^2]\Big)^{1/2}+
\Big|\hat{\cov}(U_{k}^{K},U_{k,l}^{(K,2)})\Big|.
\end{eqnarray*}
Applying Inequalities \eqref{E''U} and \eqref{majcov} and noticing that $(1-s_-(\eps))<\lambda_\eps^{1/2}<1$
(recall the definition of $\lambda_\eps$ in \eqref{deflambda}) leads to
\begin{eqnarray}\label{majcov2}
\Big|\hat{\cov}(U_{k}^{K},U_{l}^{K})\Big| \leq c( \lambda_\eps^{(l-k)/2}+\eps +(1-s_-(\eps))^{l-k} )\leq
c( \lambda_\eps^{(l-k)/2}+\eps)
\end{eqnarray}
for a finite $c$ and $\eps$ small enough. We finally get:
\begin{eqnarray}\label{varetatilde}
\hat{\var}\Big( r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1}\frac{U_k^K}{k+1} \Big) & \leq & 2r_K^2 \sum_{k=1}^{\lfloor \varepsilon K \rfloor -1}
\frac{1}{k+1} \sum_{l=k}^{\lfloor \varepsilon K \rfloor -1} \frac{\hat{\cov}(U_{k}^{K},U_{l}^{K})}{l+1} \nonumber \\
& \leq & cr_K^2\sum_{k=1}^{\lfloor \varepsilon K \rfloor -1}\frac{1}{k+1} \sum_{l=k}^{\lfloor \varepsilon K \rfloor -1} \frac{\lambda_\eps^{(l-k)/2}+\eps}{l+1} \leq cr_K^2 {\eps}\log^2 K,
\end{eqnarray}
where we used \eqref{majcov2} for the second inequality.
Applying Jensen's Inequality to the left-hand side of (\ref{ineg_tri}) and adding Equations
(\ref{bound1}), (\ref{secondterm}), \eqref{tildeeta1} and (\ref{varetatilde}) we obtain
\begin{multline}\label{finL4.1} \hat{\E}\Big| \hat{\P}\Big(TR^K_1(\beta_p)=-\infty\Big|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}\Big)
-(1-\rho_K) \Big|\\
\leq
\frac{\pi^2 r_K^2}{3s_-^2(\eps)}+ c\eps+c(\eps+r_K^2\eps \log^2 K)^{1/2}. \end{multline}
This completes the proof of Lemma \ref{lemvareta}.
\end{proof}
We finally focus on the dependence between the genealogies of $\beta_p$ and $\beta_q$,
and to this aim follow \cite{schweinsberg2005random} pp. 1622 to 1624 in the case $J=1$.
We define for $m \leq J^K$ the random variable
$$ K_m=\mathbf{1}_{\{TR^K_1(\beta_p)\geq m\}}+\mathbf{1}_{\{TR^K_1(\beta_q)\geq m\}},$$
which counts the number of neutral lineages, among those of $\beta_p$ and $\beta_q$, that recombine after the $m$-th jump (forward in time).
First we will show that for $d \in\{0,1,2\}$,
\begin{multline}\label{maj1K0} \Big| \hat{\P}(K_0=d)-{2 \choose d}\ \hat{\E} \Big[\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K})^d \\
(1-\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}))^{2-d}\Big] \Big|\leq \frac{c}{\log K}, \end{multline}
for $\eps$ small enough and $K$ large enough, where $c$ is a finite constant.
The proof of this inequality can be found in \cite{schweinsberg2005random} pp. 1622-1624 and relies on Equation (\ref{lemme51}). The idea is to couple the process
$(K_m,0\leq m\leq J^K)$ with a process $(K'_m,0\leq m\leq J^K)$ satisfying for every $m \leq J^K$,
$$ \mathcal{L}\Big( K'_{m-1}-K'_m | (\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}, (K'_u )_{m\leq u \leq J^K} \Big)=Bin(2-K'_m, \rho^K_m), $$
where $Bin(n,p)$ denotes the binomial distribution with parameters $n$ and $p$, and $\rho_m^K$ has been defined in \eqref{defrhoKm}. This implies
$$ \mathcal{L}\Big( K'_{0} \Big| (\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K} \Big)=Bin(2, \hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K})), $$
and the coupling yields
$$ \hat{\P}(K'_m\neq K_m \text{ for some } 0\leq m\leq J^K)\leq c/\log K ,$$
for $\eps$ small enough and $K$ large enough, where $c$ is a finite constant. In particular, the weak dependence between two neutral lineages stated in Lemma \ref{deuxreco} is needed in this proof.
We now aim at proving that
\begin{multline} \Big| \hat{\E} [\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K})^d\\
(1-\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}))^{2-d}]-
\rho_K^d(1-\rho_K)^{2-d} \Big|\leq c \eps^{1/2},\end{multline}
where we recall the definition of $\rho_K$ in \eqref{defrhoK}. Equation (\ref{lemme343344}) yields
\begin{multline*} \Big| \hat{\E} [\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K})^d(1-\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K}))^{2-d}]\\
- \rho_K^d(1-\rho_K)^{2-d} \Big|
\leq 2 \hat{\E}|\hat{\P}(TR^K_1(\beta_p)\geq 0|(\tilde{N}_A(\tau^K_m),\tilde{N}_a(\tau^K_m))_{m\leq J^K})-\rho_K|.\end{multline*}
Applying Equation \eqref{finL4.1} and adding \eqref{maj1K0}, we finally get for $d$ in $\{0,1,2\}$,
\begin{equation} \label{dpdce}
\limsup_{K \to \infty} \Big| \hat{\P}(\mathbf{1}_{TR^K_1(\beta_p)\geq 0}+\mathbf{1}_{TR^K_1(\beta_q)\geq 0}=d)
-{2 \choose d} \rho_K^d(1-\rho_K)^{2-d} \Big|\leq c \eps^{1/2} .
\end{equation}
\subsection{Neutral proportion at time $T_\eps^K$}
Let us again focus on the population process $N$. By abuse of notation, we still use $(TR_i^K(\beta_p), i \in \{1,2\})$ and
$TC^K(\beta_p,\beta_q)$ to denote recombination and coalescence times of the neutral genealogies for the process $N$.
According to Lemma \ref{deuxreco}, Equation \eqref{dpdce}, and Coupling \eqref{couplage2},
\begin{equation*}
\limsup_{K \to \infty} {\P}\Big(\{TR_2^K(\beta_p)\geq 0\}\cup \{0 \leq TR^K_1(\beta_p)
\leq TC^K(\beta_p,\beta_q)\}\Big|T_\eps^K<\infty\Big)\leq c\eps, \end{equation*}
and
\begin{equation*}
\limsup_{K \to \infty} \Big| {\P}(\mathbf{1}_{TR^K_1(\beta_p)\geq 0}+\mathbf{1}_{TR^K_1(\beta_q)\geq 0}=d|T_\eps^K<\infty)
-{2 \choose d} \rho_K^d(1-\rho_K)^{2-d} \Big|\leq c \eps^{1/2} ,
\end{equation*}
for a finite $c$ and $\eps$ small enough.
Hence, it is enough to distinguish two cases for the randomly chosen neutral allele $\beta_p$:
either its lineage has
undergone exactly one m-recombination, or none.
In the second case, $\beta_p$ is a $b_1$. In the first one, the probability that $\beta_p$ is a $b_1$ depends on the neutral
proportion in the $A$ population at the coalescence time. We now state that this proportion stays nearly constant during the first period.
\begin{lem}\label{lemmajpro}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$,
\begin{equation*}
\underset{K \to \infty}{\limsup}\hspace{.1cm} \P \Big( \underset{t \leq T^K_\eps}{\sup} \Big|P_{A,b_1}^K(t)-\frac{z_{Ab_1}}{z_A}\Big|>\sqrt{\varepsilon}, T^K_\eps<\infty \Big) \leq c \varepsilon .
\end{equation*}
\end{lem}
Lemma \ref{lemmajpro}, whose proof is postponed to Appendix \ref{prooflemma}, allows us to state the following lemma.
\begin{lem}\label{firstphaseweak}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$,
\begin{equation*}
\limsup_{K \to \infty} \hat{\P}\Big( \Big| P^K_{a,b_2}(T^K_\eps) - \frac{z_{Ab_2}}{z_A}\rho_K \Big|
>\eps^{1/6} \Big)\leq c\eps^{1/6} .\end{equation*}
\end{lem}
\begin{proof}
Let $(\beta_i, i \leq \lfloor \eps K\rfloor)$ denote the neutral alleles carried by the $a$-individuals at time $T_\eps^K$, and define
$$ A_2^K(i):= \{\beta_i \text{ has undergone exactly one m-recombination and is an allele } b_2\} .$$
If $\beta_i$ is a $b_2$, either its genealogy has undergone exactly one m-recombination, with an individual $Ab_2$, or
it has undergone at least two m-recombinations. Thus
\begin{equation*}
0\leq {N}^K_{ab_2}(T^K_\eps)- \sum_{i=1}^{\lfloor \eps K\rfloor} \mathbf{1}_{A_2^K(i)}\leq \sum_{i=1}^{\lfloor \eps K\rfloor} \mathbf{1}_{\{ TR^K_2(\beta_i)\neq -\infty \}}.
\end{equation*}
Moreover, the probability of $A_2^K(i)$ depends on the neutral proportion in the $A$-population when $\beta_i$ m-recombines. For $i \leq \lfloor \eps K\rfloor$,
\begin{equation} \label{expl}\Big| \hat{\P} \Big(A^K_2(i)\Big|TR^K_1(\beta_i)\geq 0,TR^K_2(\beta_i)=-\infty,\sup_{t \leq T^K_\eps}
\Big|{P}^K_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}\Big|\leq\sqrt{\varepsilon} \Big) -\Big(1-\frac{z_{Ab_1}}{z_A}\Big)\Big| \leq \sqrt{\varepsilon}. \end{equation}
Lemma \ref{lemmajpro} and Equation \eqref{tildeTbiggerT} ensure that $\limsup_{K\to\infty} \hat{\P}({\sup}_{t \leq {T}^K_\eps}
|{P}^K_{A,b_1}(t)-z_{Ab_1}/z_A|>\sqrt{\varepsilon})\leq c\eps$, and Lemmas \ref{deuxreco} and \ref{lemvareta}, and Coupling \eqref{couplage2} that
$|\hat{\P}(TR^K_1(\beta_i)\geq 0,TR^K_2(\beta_i)=-\infty) -\rho_K|\leq c\eps$. This yields:
\begin{equation*} \Big| \hat{\P} \Big(TR^K_1(\beta_i)\geq 0,TR^K_2(\beta_i)=-\infty,\sup_{t \leq {T}^K_\eps}
\Big|{P}^K_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}\Big|\leq\sqrt{\varepsilon} \Big) -\rho_K\Big| \leq c\sqrt{\varepsilon} \end{equation*}
for a finite $c$ and $\eps$ small enough. Adding \eqref{expl} we get:
\begin{equation} \label{espprop} \limsup_{K \to \infty}\Big|\hat{\E}[{P}^K_{a,b_2}({T}^K_\eps)]- \rho_K
\Big(1-\frac{z_{Ab_1}}{z_A}\Big) \Big|\leq c\sqrt{\eps}. \end{equation}
In the same way, using the weak dependence between lineages stated in \eqref{dpdce} and Coupling \eqref{couplage2}, we prove that
$ \limsup_{K \to \infty}|\hat{\E}[{P}^K_{a,b_2}(T^K_\eps)^2]- \rho_K^2(1-z_{Ab_1}/z_A)^2 |\leq c\sqrt{\eps}.$
Combined with (\ref{espprop}), this implies
that $ \limsup_{K \to \infty}\hat{\var}( {P}^K_{a,b_2}(T^K_\eps) ) \leq c \sqrt{\varepsilon}$. We end the proof by applying Chebyshev's Inequality.
\end{proof}
\subsection{Second and third periods}
Thanks to Lemma \ref{third_step} we already know that with high probability the neutral proportion in the $a$-population stays nearly constant
during the third phase. We will prove that this is also the case during the second phase. This is due to the short duration of
this period, which does not grow to
infinity with the carrying capacity $K$.
\begin{lem}\label{lemma77}
There exist two positive finite constants $c$ and $\eps_0$ such that for $\eps\leq \eps_0$,
\begin{equation}
\limsup_{K \to \infty} \hat{\P}\Big( \Big| P^K_{a,b_1}(T_{\textnormal{ext}}^K)-P^K_{a,b_1}(T_\eps^K) \Big|>\eps^{1/3} \Big)\leq c\eps^{1/3}.
\end{equation}
\end{lem}
\begin{proof}
Let us introduce the stopping time $V_\eps^K$:
$$ V_\eps^K:=\inf \Big\{t \leq t_\eps, {\sup}_{\alpha \in\mathcal{A}}\Big|{N}_\alpha^K(T^K_\eps+t)/K-{n_\alpha}^{({N}(T^K_\eps)/K)}(t)\Big|>\eps^3 \Big\}, $$
where $t_\eps$ has been introduced in \eqref{tetaCfini1}.
Recall that $(\mathcal{F}_t^K, t \geq 0)$ denotes the canonical filtration of $N^K$.
The Strong Markov Property, Doob's Maximal Inequality and Equation \eqref{crocheten1K1} yield:
\begin{multline*}
\P\Big( {T_\eps^K \leq S^K_\eps},\sup_{t \leq t_\eps}|{M}_{a}^K(T^K_\eps+t\wedge V_\eps^K)-{M}_{a}^K(T^K_\eps)|>\sqrt{\eps} \Big)\\=
\E\Big[ \mathbf{1}_{T_\eps^K \leq S^K_\eps}{\P}\Big(\sup_{t \leq t_\eps}|{M}_{a}^K(T^K_\eps+t\wedge V_\eps^K)-{M}_{a}^K(T^K_\eps)|>\sqrt{\eps}|\mathcal{F}_{T_\eps^K}\Big) \Big]\\ \leq
\frac{1}{\eps}\E\Big[ \mathbf{1}_{T_\eps^K \leq S^K_\eps}\Big(\langle M_a \rangle_{T^K_\eps+t_\eps\wedge V_\eps^K}- \langle M_a \rangle_{T^K_\eps} \Big)\Big]
\leq \frac{t_\eps C(a,\bar{n}_a+\bar{n}_A)}{\eps (I(\Gamma,\eps)-\eps^3)K},
\end{multline*}
for $\eps$ small enough, where $I(\Gamma,\eps)$ has been defined in \eqref{tetaCfini1}. But according to Equation
(\ref{result_champa1}) with $\delta = \eps^3$,
$\limsup_{K \to \infty}\P(V_\eps^K< t_\eps| T_\eps^K \leq S^K_\eps)=0$.
Moreover, Equations \eqref{defM} and \eqref{boundmart} imply
$$ \sup_{t \leq t_\eps}|{P}_{a,b_1}^K(T^K_\eps+t)-{P}_{a,b_1}^K(T^K_\eps)|\leq \sup_{t \leq t_\eps}|{M}_{a}^K(T^K_\eps+t)-{M}_{a}^K(T^K_\eps)|+r_Kt_\eps f_a .$$
As $r_K$ goes to $0$ under Assumption \ref{condweak}, we finally get:
\begin{equation} \label{lemmeteps} \underset{K \to \infty}{\limsup}\hspace{.1cm}\P \Big( \underset{t \leq t_\eps}{\sup}|{P}_{a,b_1}^K(T^K_\eps+t)-{P}_{a,b_1}^K(T^K_\eps)|>\sqrt{\varepsilon},T_\eps^K \leq
S^K_\eps \Big)=0. \end{equation}
Adding Lemma \ref{third_step} ends the proof of Lemma \ref{lemma77}.
\end{proof}
\subsection{End of the proof of Theorem \ref{main_result} in the weak recombination regime}
Thanks to Lemmas \ref{firstphaseweak} and \ref{lemma77} we get that for $\eps$ small enough,
$$ \limsup_{K \to \infty} \hat{\P}\Big( \Big| P^K_{a,b_2}(T^K_{\text{ext}}) - \rho_K \frac{z_{Ab_2}}{z_A} \Big|>2\eps^{1/6} \Big)\leq c\eps^{1/6} .
$$
Moreover, \eqref{gdeprobabfix} ensures that $\liminf_{K\to \infty}\P(T^K_{\eps} \leq S^K_\eps|\text{Fix}^K)\geq 1-c\eps$, which implies
$$ \limsup_{K \to \infty} {\P}\Big(\mathbf{1}_{\text{Fix}^K} \Big| P^K_{a,b_2}(T^K_{\textnormal{ext}})
- \rho_K \frac{z_{Ab_2}}{z_A} \Big|>2\eps^{1/6} \Big)\leq c\eps^{1/6} .
$$
This is equivalent to the convergence in probability and ends the proof of Theorem \ref{main_result}.
\renewcommand\thesection{\Alph{section}}
\setcounter{section}{0}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\section{Technical results}\label{known_results}
This section is dedicated to technical results needed in the proofs.
We first present some results stated in \cite{champagnat2006microscopic}.
We recall Definitions \eqref{deffitinv1}, \eqref{defText}, \eqref{defTheta}, \eqref{SKeps}, \eqref{TKTKeps1} and \eqref{tetaCfini1} and that the notation $.^K$ refers to the processes that satisfy Assumption \ref{defrK}. Proposition \ref{fix_champ} is a direct
consequence of
Equations (42), (71), (72) and (74) in \cite{champagnat2006microscopic}:
\begin{pro}\label{fix_champ}
There exist two positive finite constants $M_1$ and $\eps_0$ such that for every $\eps\leq \eps_0$
\begin{equation}\label{taillepopfinale}
\underset{K \to \infty}{\lim} \P\Big( \Big|N_a^K(T^K_{\textnormal{ext}})-K\bar{n}_a\Big|>\eps K \Big| \textnormal{Fix}^K \Big)=0,
\quad \text{and} \quad
\underset{K \to \infty}{\limsup}\ \Big|\P(T^K_\eps<\infty)- \frac{S_{aA}}{f_a}\Big|\leq M_1\eps.
\end{equation}
Moreover there exists $M_2>0$ such that for every $\eps\leq \eps_0$, the probability of the event
\begin{equation}\label{def_FepsK} F_\eps^K=\Big\{T^K_\eps\leq S^K_\eps, N_A^K(T^K_\eps+t_\eps)<\frac{\eps^2 K}{2},
| N_a^K(T^K_\eps+t_\eps)-\bar{n}_a K|<\frac{\eps K}{2}\Big\} \end{equation}
satisfies
\begin{equation}\label{res_champ}\underset{K \to \infty}{\liminf}\ \P(T^K_\eps\leq S^K_\eps)\geq \underset{K \to \infty}{\liminf}\ \P(F_\eps^K)\geq \frac{S_{aA}}{f_a}-M_2\eps, \end{equation}
and if $z \in \Theta$, then there exist two positive finite constants $V$ and $c$ such that:
\begin{equation}\label{resSepsKz}
\underset{K \to \infty}{\liminf} \ \P({U}^K_\eps(z)>e^{VK})\geq 1-c\eps.
\end{equation}
\end{pro}
Thanks to these results we can state the following Lemma, which motivates the coupling of $N$ and $\tilde{N}$ and allows us to focus on the event $\{ \tilde{T}_\eps^K <\infty \}$ rather
than on $\text{Fix}^K$ in Section \ref{proofweak}.
\begin{lem}
There exist two positive finite constants $c$ and $\eps_0$ such that for every $\eps\leq \eps_0$
\begin{equation} \label{tildeTbiggerT}
\underset{K \to \infty}{\limsup} \ \P(T_{\eps}^K<\infty,T^K_\eps>S^K_\eps)\leq c \eps,
\end{equation}
and
\begin{equation}\label{gdeprobabfix}
\limsup_{K\to \infty}\Big[\P(\{T^K_{\eps} \leq S^K_\eps\} \setminus \textnormal{Fix}^K)+\P(\textnormal{Fix}^K \setminus \{T^K_{\eps} \leq S^K_\eps\})\Big]\leq c\eps.
\end{equation}
\end{lem}
\begin{proof}
We have the following equality
\begin{eqnarray*}
\P(T_{\eps}^K<\infty,T^K_\eps>S^K_\eps) &=& \P(T_{\eps}^K<\infty)-\P(T_{\eps}^K<\infty,T^K_\eps\leq S^K_\eps)\\
&=& \P(T_{\eps}^K<\infty)-\P(T^K_\eps\leq S^K_\eps),
\end{eqnarray*}
where we used the inclusion $\{T^K_\eps\leq S^K_\eps\}\subset \{T_{\eps}^K<\infty\}$, as $S_\eps^K$ is almost surely finite (a birth and death process with competition has a finite extinction time).
Equations \eqref{taillepopfinale} and \eqref{res_champ} end the proof of \eqref{tildeTbiggerT}.
From Equation \eqref{taillepopfinale}, we also have that for $\eps<\bar{n}_a/2$
\begin{equation}\label{Tepsfix}
\underset{K \to \infty}{\lim} \hspace{.1cm} \P(T_\eps^K=\infty|\textnormal{Fix}^K)\leq \underset{K \to \infty}{\lim}
\P\Big( \Big|N_a^K(T^K_{\text{ext}})-K\bar{n}_a\Big|>\eps K \Big| \textnormal{Fix}^K \Big)=0.
\end{equation}
This implies that
$$ \P(T^K_\eps>S^K_\eps,\textnormal{Fix}^K)\leq \P(T^K_\eps>S^K_\eps,T^K_\eps<\infty)+\P(T^K_\eps=\infty,\textnormal{Fix}^K)\leq c\eps, $$
where we used \eqref{tildeTbiggerT}. Moreover,
\begin{eqnarray*}
\P(T^K_{\eps} \leq S^K_\eps ,(\textnormal{Fix}^K)^c)&\leq & \P(T^K_{\eps} <\infty, (\textnormal{Fix}^K)^c)\\
& =& \P(T^K_{\eps} <\infty)- \P(T_\eps^K<\infty|\textnormal{Fix}^K)\P(\textnormal{Fix}^K)
\leq c\eps,
\end{eqnarray*}
where we used \eqref{taillepopfinale}, \eqref{Tepsfix} and \eqref{proba_fix}.
\end{proof}
We also recall some results on birth and death processes whose proofs can be found in Lemma 3.1 in
\cite{schweinsberg2005random} and in \cite{athreya1972branching}, pp. 109 and 112.
\begin{pro}
Let $Z=(Z_t)_{t \geq 0}$ be a birth and death process with individual birth and death rates $b$ and $d $. For $i \in \Z^+$,
$T_i=\inf\{ t\geq 0, Z_t=i \}$ and $\P_i$ (resp. $\E_i$) is the law (resp. expectation) of $Z$ when $Z_0=i$. Then
\begin{enumerate}
\item[$\bullet$] For $i \in \N$ and $t \geq 0$,
\begin{equation} \label{espviemort}
\E_i[Z_t]=ie^{(b-d)t}.
\end{equation}
\item[$\bullet$] For $(i,j,k) \in \Z_+^3$ such that $j \in (i,k)$,
\begin{equation} \label{hitting_times1} \P_j(T_k<T_i)=\frac{1-(d/b)^{j-i}}{1-(d/b)^{k-i}} .\end{equation}
\item[$\bullet$] If $d\neq b \in \R_+^*$, for every $i\in \Z_+$ and $t \geq 0$,
\begin{equation} \label{ext_times} \P_{i}(T_0\leq t )= \Big( \frac{d(1-e^{(d-b)t})}{b-de^{(d-b)t}} \Big)^{i}.\end{equation}
\item[$\bullet$] If $0<d<b$, on the non-extinction event of $Z$, which has probability $1-(d/b)^{Z_0}$, the following convergence holds:
\begin{equation} \label{equi_hitting} T_N/\log N \underset{N \to \infty}{\to} (b-d)^{-1}, \quad a.s. \end{equation}
\end{enumerate}
\end{pro}
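For the reader's convenience, let us recall how \eqref{hitting_times1} follows from a first-step analysis: the jump chain of $Z$ goes up with probability $b/(b+d)$ at each step, so $h_j:=\P_j(T_k<T_i)$ satisfies
$$ h_j=\frac{b}{b+d}\,h_{j+1}+\frac{d}{b+d}\,h_{j-1}, \qquad h_i=0, \quad h_k=1, $$
whose solutions are of the form $h_j=A+B(d/b)^j$ when $b\neq d$; identifying the constants $A$ and $B$ with the boundary conditions gives \eqref{hitting_times1}.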
Finally, we recall Lemma 3.4.3 in \cite{durrett2008probability} and Lemma 5.1 in \cite{schweinsberg2005random}. Let $d\in \N$. Then
\begin{lem}\label{lemmes_durrett}
\begin{enumerate}
\item[$\bullet$] Let $a_1,\ldots,a_d$ and $b_1,\ldots,b_d$ be complex numbers of modulus smaller than $1$. Then
\begin{equation}\label{lemme343344} \Big|\underset{i=1}{\overset{d}{\prod}}a_i-\underset{i=1}{\overset{d}{\prod}}b_i\Big| \leq \underset{i=1}{\overset{d}{\sum}}|a_i-b_i| . \end{equation}
\item[$\bullet$] Let $V$ and $V'$ be $\{0, 1,... , d\}$-valued random variables such that $\E[V] = \E[V']$. Then, there exist random variables $\tilde{V}$ and $\tilde{V}'$
on some probability space such that $V$ and $\tilde{V}$ have the same distribution, $V'$ and $\tilde{V}'$ have the same distribution, and
\begin{equation}\label{lemme51}\P( \tilde{V} \neq \tilde{V}') \leq d \max\{\P( \tilde{V} \geq 2),\P( \tilde{V}'\geq 2)\}.\end{equation}
\end{enumerate}
\end{lem}
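The first inequality is obtained by writing the difference of products as a telescoping sum,
$$ \prod_{i=1}^{d}a_i-\prod_{i=1}^{d}b_i=\sum_{i=1}^{d}\Big(\prod_{j<i}a_j\Big)(a_i-b_i)\Big(\prod_{j>i}b_j\Big), $$
in which each product multiplying $(a_i-b_i)$ has modulus at most $1$.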
For $0<s<1$, if $\tilde{Z}^{(s)}$ denotes a random walk with jumps $\pm 1$ where up jumps occur
with probability $1/(2-s)$ and down jumps with probability $(1-s)/(2-s)$, we denote by $\P_i^{(s)}$ the law of $\tilde{Z}^{(s)}$ when the initial state is $i \in \N$
and introduce for every $a \in \R_+$ the stopping time
\begin{equation} \label{deftauas}
\tau_a:=\inf \{ n \in \Z_+, \tilde{Z}^{(s)}_n= \lfloor a \rfloor \}.
\end{equation}
We also introduce for $\eps$ small enough and $0\leq j,k< \lfloor \eps K \rfloor$, the quantities
\begin{equation} \label{defqkl} q_{j,k}^{(s_1,s_2)}:=\frac{\P_{k+1}^{(s_1)}(\tau_{ \eps K }<\tau_k)}{\P_{k+1}^{(s_2)}(\tau_{ \eps K }<\tau_j)}=\frac{s_1}{1-(1-s_1)^{\lfloor \eps K \rfloor -k}} \frac{1-(1-s_2)^{\lfloor \eps K \rfloor -j}}{1-(1-s_2)^{k+1-j}} , \quad 0<s_1,s_2<1,
\end{equation}
whose expressions are direct consequences of \eqref{hitting_times1}.
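Indeed, $\tilde{Z}^{(s)}$ is the jump chain of a birth and death process with death-to-birth ratio $d/b=1-s$, so that \eqref{hitting_times1} gives
$$ \P_{k+1}^{(s)}(\tau_{ \eps K }<\tau_k)=\frac{s}{1-(1-s)^{\lfloor \eps K \rfloor -k}} \quad \text{and} \quad \P_{k+1}^{(s)}(\tau_{ \eps K }<\tau_j)=\frac{1-(1-s)^{k+1-j}}{1-(1-s)^{\lfloor \eps K \rfloor -j}}. $$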
Let us now state a technical result, which helps us to control upcrossing numbers of
the process $\tilde{N}_a^K$ before reaching the size $\lfloor\eps K\rfloor$ (see Appendix \ref{prooflemma}).
\begin{lem}\label{ineqqs}
For $a \in ]0,1/2[ $, $(s_1,s_2) \in [a,1-a]^2$, and $0\leq j \leq k < l < \lfloor \eps K \rfloor $,
\begin{equation} \label{minqk}
q^{(s_1 \wedge s_2, s_1 \vee s_2)}_{0,k} \geq s_1 \wedge s_2\quad \text{and} \quad
\Big| \frac{1}{q^{(s_1, s_2)}_{k,l}}-\frac{1}{q^{(s_2, s_1)}_{j,l}} \Big|\leq \frac{4(1+{1}/{s_2})}{ea^2 |\log (1-a)|} |s_2-s_1|+ \frac{(1-s_2)^{l+1-k}}{s_2^3}.
\end{equation}
\end{lem}
\begin{proof}The first part of (\ref{minqk}) is a direct consequence of Definition \eqref{defqkl}.
Let $a$ be in $]0, 1/2[$ and consider functions $f_{\alpha,\beta} : x \mapsto (1-x^\alpha)/(1-x^{\beta}), (\alpha,\beta) \in \N^2, x \in [a,1-a] $.
Then for $x \in [a,1-a]$,
\begin{equation} \label{lemtech} \| f_{\alpha,\beta} ' \|_\infty \leq 4(ea^2|\log (1-a)|)^{-1}. \end{equation}
Indeed, the first derivative of $f_{\alpha,\beta}$ is:
$$ f_{\alpha,\beta}'(x)=\frac{\beta x^{\beta-1}(1-x^{\alpha})-\alpha x^{\alpha-1}(1-x^{\beta})}{(1-x^{\beta})^2}. $$
Hence, for $x \in [a,1-a]$,
$$ |f_{\alpha,\beta}'(x)|\leq \frac{\beta (1-a)^{\beta}+\alpha (1-a)^{\alpha}}{(1-a)a^2}\leq 2\frac{\beta (1-a)^{\beta}+\alpha (1-a)^{\alpha}}{a^2} ,$$
where we used that $1-x^{\beta}\geq 1-(1-a)$ and that $1-a\geq 1/2$. Adding the following inequality
$$\underset{k \in \N}{\sup}\hspace{.2cm}k(1-a)^k\leq\underset{x\in \R^+}{\sup}\hspace{.2cm}x(1-a)^x= (e|\log (1-a)|)^{-1},$$
completes the proof of \eqref{lemtech}.
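The value of this last supremum comes from elementary calculus: the function $g(x)=x(1-a)^x=xe^{x\log(1-a)}$ satisfies $g'(x)=e^{x\log(1-a)}(1+x\log (1-a))$, which vanishes at $x^*=|\log (1-a)|^{-1}$, where $g(x^*)=(e|\log (1-a)|)^{-1}$.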
From \eqref{hitting_times1}, we get for $0<s<1$ and $0\leq j\leq k<l<\lfloor \eps K \rfloor$,
\begin{eqnarray}\label{diff_k0} \Big| \P_{l+1}^{(s)}(\tau_{ \eps K }<\tau_k)-\P_{l+1}^{(s)}(\tau_{ \eps K }<\tau_j)
\Big|&=&\frac{(1-(1-s)^{k-j})((1-s)^{l+1-k}-(1-s)^{\lfloor \eps K \rfloor-k})}{(1-(1-s)^{\lfloor \eps K \rfloor-k})(1-(1-s)^{\lfloor \eps K \rfloor-j})} \nonumber \\
&\leq & (1-s)^{l+1-k}s^{-2}.
\end{eqnarray}
The triangle inequality leads to:
\begin{eqnarray*}
\Big| \frac{1}{q^{(s_1, s_2)}_{k,l}}-\frac{1}{q^{(s_2, s_1)}_{j,l}} \Big|&=&\Big| \frac{\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_k)}
{\P_{l+1}^{(s_1)}(\tau_{ \eps K }<\tau_l)}-\frac{\P_{l+1}^{(s_1)}(\tau_{ \eps K }<\tau_j)}{\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_l)} \Big|\\
&\leq&\Big| \frac{1}{\P_{l+1}^{(s_1)}(\tau_{ \eps K }<\tau_l)}-\frac{1}{\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_l)}
\Big|\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_k)\\
&& +\frac{1}{\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_l)} \Big|\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_k)-\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_j)\Big|\\
&& +\frac{1}{\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_l)} \Big|\P_{l+1}^{(s_1)}(\tau_{ \eps K }<\tau_j)-\P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_j)\Big|.
\end{eqnarray*}
Noticing that $ \P_{l+1}^{(s_2)}(\tau_{ \eps K }<\tau_l)\geq \P_{l+1}^{(s_2)}(\tau_{ \infty }<\tau_l) = \P_{1}^{(s_2)}(\tau_{ \infty }<\tau_0)=s_2 ,$
and using (\ref{diff_k0}) and the Mean Value Theorem with (\ref{lemtech}), we get the second part of (\ref{minqk}).
\end{proof}
\section{Proofs of Lemmas \ref{uphold} and \ref{lemmajpro}} \label{prooflemma}
\begin{proof}[Proof of Equation \eqref{majcov}]
In the whole proof, the integer $n_A$ denotes the state of $\tilde{N}_A$ and thus belongs to $I_\eps^K$
which has been defined in \eqref{compact1}. ${\P}_{(n_A,n_a)}$ (resp. $\hat{\P}_{(n_A,n_a)}$) denotes the probability ${\P}$
(resp. $\hat{\P}$) when $(\tilde{N}_A(0),\tilde{N}_a(0))=(n_A,n_a) \in \Z_+^2$. We introduce for $u \in \R_+$ the hitting time of $\lfloor u \rfloor$ by the process
$\tilde{N}_a$:
\begin{equation}\label{tpssigma}
\sigma^K_u:=\inf \{ t\geq 0, \tilde{N}_a^K(t)= \lfloor u \rfloor\}.
\end{equation}
Let $(i,j,k)$ be in $\Z_+^3$ with $j<k< \lfloor \eps K \rfloor$. Between jumps $\zeta_j^K$ and $J^K$ the process $\tilde{N}_a$ necessarily
jumps from
$k$ to $k+1$. Then, either it reaches $\lfloor \eps K \rfloor$ before returning to $k$, or it again jumps from
$k$ to $k+1$, and so on. Thus we approximate the probability that there is only one jump from $k$ to $k+1$ by comparing $U_{j,k}^{(K,2)}$
with geometrically distributed random variables. As we do not know the value of $\tilde{N}_A$ when $\tilde{N}_a$ hits $k+1$ for the first time,
we take the maximum over all the possible values in $I_\eps^K$. Recall Definition \eqref{defhatP}. We get,
as $\{\tilde{T}_{\eps}^K<\sigma_{j}^K\}\subset\{\tilde{T}_{\eps}^K<\sigma_{k}^K\}\subset\{\tilde{T}_{\eps}^K<\infty\}$:
\begin{eqnarray*}
\hat{\P}(U_{j,k}^{(K,2)}=1|U_{j}^{K}=i)&\leq & \underset{n_A\in I_\eps^K}{\sup}\hat{\P}_{(n_A,k+1)}(\tilde{T}_{\eps}^K<\sigma_{k}^K|
\tilde{T}_{\eps}^K<\sigma_{j}^K,U_{j}^{K}=i)\\
&=&\underset{n_A\in I_\eps^K}{\sup}{\P}_{(n_A,k+1)}(\tilde{T}_{\eps}^K<\sigma_{k}^K|
\tilde{T}_{\eps}^K<\sigma_{j}^K,U_{j}^{K}=i)\\
&=&\underset{n_A\in I_\eps^K}{\sup}\frac{{\P}_{(n_A,k+1)}(\tilde{T}_{\eps}^K<\sigma_{k}^K,U_{j}^{K}=i)}
{{\P}_{(n_A,k+1)}(
\tilde{T}_{\eps}^K<\sigma_{j}^K,U_{j}^{K}=i)}\\
&=&\underset{n_A\in I_\eps^K}{\sup}\frac{{\P}_{(n_A,k+1)}(\tilde{T}_{\eps}^K<\sigma_{k}^K)}
{{\P}_{(n_A,k+1)}(
\tilde{T}_{\eps}^K<\sigma_{j}^K)},
\end{eqnarray*}
where we used that on the events $\{\tilde{T}_{\eps}^K<\sigma_{j}^K\}$ and $\{\tilde{T}_{\eps}^K<\sigma_{k}^K\}$ the jumps from $j$ to $j+1$
belong to the past, and the Markov Property.
Coupling \eqref{couplage12} allows us to compare these conditional probabilities with the probabilities of the same
events under $\P^{(s_-(\eps))}$ and $\P^{(s_+(\eps))}$, and recalling \eqref{defqkl} we get
$$ \hat{\P}(U_{j,k}^{(K,2)}=1|U_{j}^{K}=i)\leq
\frac{\P_{k+1}^{(s_+(\eps))}(\tau_{\eps K}<\tau_{k})}
{\P_{k+1}^{(s_-(\eps))}(\tau_{\eps K}<\tau_j )}= q^{(s_+(\eps),s_-(\eps))}_{j,k}. $$
In an analogous way we show that $ \hat{\P}(U_{j,k}^{(K,2)}=1|U_{j}^{K}=i)\geq q^{(s_-(\eps),s_+(\eps))}_{j,k}$. We deduce that we can construct
two geometrically distributed random variables $G_1$ and $G_2$, possibly on an enlarged space,
with respective parameters $q^{(s_+(\eps),s_-(\eps))}_{j,k}\wedge 1$ and $q^{(s_-(\eps),s_+(\eps))}_{j,k}$ such that on the event $\{ U_{j}^{K}=i \}$,
\begin{equation}\label{compaU'|Uj1} G_1\leq U_{j,k}^{(K,2)} \leq G_2. \end{equation}
For the same reasons we obtain $ q^{(s_-(\eps),s_+(\eps))}_{j,k}\leq \hat{\P}(U_{j,k}^{(K,2)}=1)\leq q^{(s_+(\eps),s_-(\eps))}_{j,k} \wedge 1$,
and again we can construct two random variables $G'_1\overset{d}{=}G_1$ and $G'_2\overset{d}{=}G_2$ such that
\begin{equation}\label{compaU'} G'_1 \leq U_{j,k}^{(K,2)} \leq G'_2. \end{equation}
Recall that $U_{0,k}^{(K,2)}=U_{k}^{K}$. Hence taking $j=0$ and adding the first part of Equation \eqref{minqk} gives the first inequality of \eqref{majcov}.
According to Definition \eqref{def_s_-s_+1}, $|s_+(\eps)-s_-(\eps)|\leq c\eps$ for a finite $c$.
Hence Equations \eqref{compaU'|Uj1}, \eqref{compaU'} and \eqref{minqk} entail the existence of a finite $c$ such that for $\eps$ small
enough $| \hat{\E}[U_{j,k}^{(K,2)}|U_{j}^{K}=i ]-\hat{\E}[U_{j,k}^{(K,2)} ]| \leq c\eps+ {(1-s_-(\eps))^{k+1-j}}/{s_-^3(\eps)}$. Thus according
to the first part of Equation \eqref{majcov},
\begin{eqnarray} \label{cov1} \Big|\hat{\cov}(U_{j,k}^{(K,2)},U_{j}^{K})\Big| & \leq & \underset{i \in \N^*}{\sum}
i\hat{\P}(U_{j}^{K}=i) \Big| \hat{\E}[U_{j,k}^{(K,2)}|U_{j}^{K}=i ]-\hat{\E}[U_{j,k}^{(K,2)} ] \Big| \nonumber \\
&\leq & \frac{2}{s^2_-(\eps)}\Big(c\eps+ \frac{(1-s_-(\eps))^{k+1-j}}{s_-^3(\eps)} \Big) ,\end{eqnarray}
where we use that $U_{j}^{K}\leq (U_{j}^{K})^2$. This ends the proof of \eqref{majcov}.
\end{proof}
\begin{proof}[Proof of Equation \eqref{E''U}]
Definitions \eqref{defdaba} and Coupling \eqref{couplage12} ensure that for $n_A \in I_\eps^K$, $\eps$ small enough and
$K$ large enough,
\begin{eqnarray*} \hat{\P}_{(n_A,k)}(\tilde{N}_a(dt)=k+1)
&=& \frac{\P_{(n_A,k+1)}(\tilde{T}_\eps^K<\infty )}
{\P_{(n_A,k)}(\tilde{T}_\eps^K<\infty )}{\P}_{(n_A,k)}(\tilde{N}_a(dt)=k+1)
\\&\geq &\frac{\P_{k+1}^{(s_-(\eps))}(\tau_{\eps K}<\tau_0)}{\P_{k}^{(s_+(\eps))}(\tau_{\eps K}<\tau_0)}f_a k dt\\
& = & \frac{1-(1-s_-(\eps))^{k+1}}{1-(1-s_-(\eps))^{\lfloor \eps K \rfloor}}
\frac{1-(1-s_+(\eps))^{\lfloor \eps K \rfloor}}{1-(1-s_+(\eps))^{k}}f_akdt\\
& \geq & s_-^2(\eps)f_akdt,
\end{eqnarray*}
and
\begin{eqnarray*} \hat{\P}_{(n_A,k)}(\tilde{N}_A(dt)\neq n_A)&\leq &
\frac{\P_{k}^{(s_+(\eps))}(\tau_{\eps K}<\tau_0)}{\P_{k}^{(s_-(\eps))}(\tau_{\eps K}<\tau_0)}
{\P}_{(n_A,k)}(\tilde{N}_A(dt)\neq n_A)
\\ & \leq & (1+c\eps)2f_A\bar{n}_AKdt
\end{eqnarray*}
for a finite $c$, where we use \eqref{hitting_times1} and that $D_A+C_{A,A}\bar{n}_A=f_A$. Thus for $\eps$ small enough:
\begin{equation*}\label{holds}\hat{\P}(\tilde{N}_a(\tau_{m+1}^K)\neq \tilde{N}_a(\tau_{m}^K)| \tilde{N}_a(\tau_{m}^K)=k)\geq \frac{s_-^2(\eps)f_ak}{3f_A\bar{n}_AK}. \end{equation*}
If $D_{k}^{K}$ denotes the downcrossing number from $k$ to $k-1$ before $\tilde{T}_\eps^K$,
then under the probability $\hat{\P}$ we can bound $U_{k}^{K}+D_{k}^{K}+H_{k}^{K}$ by a sum of
$U_{k}^{K}+D_{k}^{K}$ independent geometrically distributed random variables $G_i^K$ with parameter $ {s_-^2(\eps)f_ak}/{3f_A\bar{n}_AK}$, so that
$ H_{k}^{K}\leq {\sum}_{1\leq i\leq U_{k}^{K}+D_{k}^{K}}(G_i^K-1). $
Let us notice that if $k\geq 2$, $D_{k}^{K}=U_{k-1}^{K}-1$, and $D_{1}^{K}=0$. Using the first part of \eqref{majcov} twice we get
$$ \hat{\E}[H_{k}^{K}] \leq \Big( \frac{4}{s^2_-(\eps)}-1 \Big) \Big(\frac{3f_A\bar{n}_AK}{s_-^2(\eps)f_ak}-1\Big),$$
which ends the proof of the first inequality in \eqref{E''U}.\\
\noindent As the mutant population
size is not Markovian we cannot use symmetry and the Strong Markov Property to control the dependence of jumps before and after the last visit to a
given state as in \cite{schweinsberg2005random}. Hence we describe the successive excursions of $\tilde{N}_a^K$ above a given level to
get the last
inequality in \eqref{E''U}. Let
$\tilde{U}_{j,k}^{(i)}$ be the number of jumps from
$k$ to $k+1$ during the $i$-th excursion above $j$.
We first bound the expectation $\hat{\E}[ (\tilde{U}_{j,k}^{(i)})^2 ]$. During an excursion above $j$, $\tilde{N}_a$ hits $j+1$, but
we do not know the value of $\tilde{N}_A$ at this time. Thus we take the maximum value for the probability when $n_A$ belongs to
$I_\eps^K$, and
$$\hat{\P} (\tilde{U}_{j,k}^{(i)}\geq 1 ) \leq {\sup}_{n_A\in I_\eps^K} \hat{\P}_{(n_A,j+1)}
( \sigma_{k+1}^K<\sigma_j^K|\sigma_j^K<\tilde{T}_{\eps}^K).$$
Then using Coupling \eqref{couplage12} and Definition \eqref{defhatP} we obtain
\begin{eqnarray*}
{\hat\P} \Big(\tilde{U}_{j,k}^{(i)}\geq 1 \Big) & \leq &
\sup_{n_A\in I_\eps^K} \frac{\P_{(n_A,j+1)}(\sigma_{k+1}^K<\sigma_j^K<\tilde{T}_{\eps}^K<\infty)}
{\P_{(n_A,j+1)}(\sigma_j^K<\tilde{T}_{\eps}^K<\infty)}
\\
& \leq & \frac{\P_{j}^{(s_+(\eps))}(\tau_{ \eps K }<\tau_{0})\P_{k+1}^{(s_-(\eps))}(\tau_{j}<\tau_{ \eps K })\P_{j+1}^{(s_+(\eps))}(\tau_{k+1}<\tau_{ j })}
{\P_{j}^{(s_-(\eps))}(\tau_{ \eps K }<\tau_{0})\P_{j+1}^{(s_+(\eps))}(\tau_{j}<\tau_{ \eps K })}.
\end{eqnarray*}
Adding Equation \eqref{hitting_times1} we finally get
\begin{equation}\label{uijk1} \hat{\P} \Big(\tilde{U}_{j,k}^{(i)}\geq 1 \Big) \leq \frac{(1-s_-(\eps))^{k+1-j}}{s_-(\eps)(1-s_+(\eps))} . \end{equation}
Moreover if $\tilde{U}_{j,k}^{(i)}\geq 1$, $\tilde{N}_a$ necessarily hits $k$ after its first jump from $k$ to $k+1$, and before its return to $j$.
Using the same techniques as before we get:
\begin{eqnarray*}
\hat{\P} \Big(\tilde{U}_{j,k}^{(i)}= 1 |\tilde{U}_{j,k}^{(i)}\geq 1 \Big)&\geq & \underset{n_A \in I_\eps^K}{\inf}
\hat{\P}_{(n_A,k)}(\sigma_{j}^K<\sigma_{k+1}^K |\sigma_{j}^K<\tilde{T}_{\eps}^K ) \\
&\geq &
\frac{\P_{j}^{(s_-(\eps))}(\tau_{ \eps K }<\tau_{0})\P_{k}^{(s_+(\eps))}(\tau_{j}<\tau_{k+1})}
{\P_{j}^{(s_+(\eps))}(\tau_{ \eps K }<\tau_{0})\P_{k}^{(s_-(\eps))}(\tau_{j}<\tau_{\eps K })},
\end{eqnarray*}
which yields
\begin{equation}\label{uijk2} \hat{\P} \Big(\tilde{U}_{j,k}^{(i)}= 1 |\tilde{U}_{j,k}^{(i)}\geq 1 \Big)
\geq s_-(\eps)s_+(\eps)\Big(\frac{1-s_+(\eps)}{1-s_-(\eps)}\Big)^{k-j}\geq s_-^2(\eps)\Big(\frac{1-s_+(\eps)}{1-s_-(\eps)}\Big)^{k-j}=:q.\end{equation}
Hence, given that $\tilde{U}_{j,k}^{(i)}$ is non-null, $\tilde{U}_{j,k}^{(i)}$ is smaller than a geometrically distributed random variable with
parameter $q$. In particular,
$$ \hat{\E} \Big[ (\tilde{U}_{j,k}^{(i)})^2 |\tilde{U}_{j,k}^{(i)}\geq 1 \Big]\leq \frac{2}{q^2}= \frac{2}{s^4_-(\eps)}\Big(\frac{1-s_-(\eps)}{1-s_+(\eps)}\Big)^{2(k-j)}. $$
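The bound $2/q^2$ is the standard second-moment estimate for a geometric law; for completeness, a side computation (not in the original) with $G$ geometric of parameter $q$:

```latex
% Second moment of a geometric random variable G with parameter q,
% P(G = m) = q(1-q)^{m-1} for m >= 1:
\mathbb{E}[G] = \frac{1}{q}, \qquad
\operatorname{Var}(G) = \frac{1-q}{q^2}, \qquad\text{hence}\qquad
\mathbb{E}[G^2] = \operatorname{Var}(G) + \mathbb{E}[G]^2
 = \frac{2-q}{q^2} \;\le\; \frac{2}{q^2}.
```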
Combining with Equation \eqref{uijk1} and recalling that $|s_+(\eps)-s_-(\eps)|\leq c\eps$ for a finite $c$ yields
$$ \hat{\E} \Big[(\tilde{U}_{j,k}^{(i)})^2\Big]\leq \frac{2 \lambda_\eps^{k-j}}{ s^5_-(\eps)(1-s_+(\eps))} , \quad \text{where} \quad \lambda_\eps:=\frac{(1-s_-(\eps))^3}{(1-s_+(\eps))^2} <1.$$
Using that for $n \in \N$ and $(x_i,1\leq i \leq n) \in \R^n$, $(\sum_{1\leq i \leq n}x_i)^2 \leq n\sum_{1\leq i \leq n}x_i^2$ and that the number of excursions above $j$ before $\tilde{T}_\eps^K$ is $U_{j}^{K}-1$, we get
$$ \hat{\E} \Big[({U}_{j,k}^{(K,1)})^2\Big]\leq \hat{\E} \Big[U_{j}^{K}-1\Big] \frac{2 \lambda_\eps^{k-j}}{ s^5_-(\eps)(1-s_+(\eps))} \leq \frac{4 \lambda_\eps^{k-j}}{ s^7_-(\eps)(1-s_+(\eps))}, $$
where we used the first part of Equation \eqref{majcov}. This ends the proof of Equation \eqref{E''U}.\end{proof}
\begin{proof}[Proof of Equation \eqref{espnoreco}]
Definition \eqref{defqkl}, Inequality \eqref{compaU'} and Equation \eqref{hitting_times1} yield:
\begin{equation*} \label{exprexpli} r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1} \frac{\hat{\E}[U_k^K]}{k+1} \geq r_K\underset{k=1}{\overset{\lfloor \eps K \rfloor -1}{\sum}}
\Big[(k+1)q_{0,k}^{(s_+(\eps),s_-(\eps))}\Big]^{-1}=\frac{r_K (A-B)}{s_+(\eps)({1-(1-s_-(\eps))^{\lfloor \eps K \rfloor}})}
, \end{equation*}
with
$$ A:=\underset{k=1}{\overset{\lfloor \eps K \rfloor -1}{\sum}}\frac{1-(1-s_-(\eps))^{k+1}}{k+1}, \quad \text{and} \quad
B:=(1-s_+(\eps))^{\lfloor \eps K \rfloor}\underset{k=1}{\overset{\lfloor \eps K \rfloor -1}{\sum}}\frac{1-(1-s_-(\eps))^{k+1}}{(1-s_+(\eps))^{k}(k+1)}. $$
For large $K$, $A=\log (\eps K)+O(1)$, and for every $u>1$ there exists $D(u)<\infty$ such that
$\sum_{k=1}^{\lfloor \eps K \rfloor}u^k/(k+1)\leq D(u)u^{\lfloor \eps K \rfloor }/\lfloor \eps K \rfloor $. This implies that
$B \leq {c}/{\lfloor \varepsilon K \rfloor}$ for a finite $c$.
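The estimate on $A$ is the usual harmonic asymptotics; written out (a side remark, with $n=\lfloor \eps K\rfloor$):

```latex
% Split A into a harmonic part and a convergent correction:
A = \sum_{k=1}^{n-1}\frac{1}{k+1}
  - \sum_{k=1}^{n-1}\frac{(1-s_-(\eps))^{k+1}}{k+1}
  = \big(\log n + O(1)\big) - O(1)
  = \log(\eps K) + O(1),
% since \sum_{k\ge 1}(1-s_-(\eps))^{k+1}/(k+1) converges
% (geometric ratio 1 - s_-(\eps) < 1).
```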
Finally, by definition, for $\eps$ small enough,
$|s_+(\eps)-S_{aA}/f_a|\leq c\eps$ for a finite constant $c$. This yields
$$ r_K \sum_{k=1}^{\lfloor \eps K \rfloor -1} \frac{\hat{\E}[U_k^K]}{k+1}\geq (1-c\eps) \frac{r_Kf_a \log K }{S_{aA}}$$
for a finite $c$, which concludes the proof of the lower bound. The upper bound is obtained in the same way. This ends the proof of Lemma \ref{uphold}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemmajpro}]
We use Coupling \eqref{couplage11} to control the growth of the mutant population during the first period
of invasion, and the semi-martingale decomposition in Proposition \ref{mart_prop} to bound the fluctuations of $M_A$.
The hitting time of $\lfloor \eps K \rfloor$ and non-extinction event of $Z^*_\eps$ are denoted by:
\begin{equation*}\label{hittingextetoile}
T^{*,K}_\eps=\inf \{ t \geq 0, Z^*_\eps(t)= \lfloor \varepsilon K \rfloor \}, \quad \text{and} \quad F^*_\eps=\Big\{ Z^*_\eps(t)\geq 1, \forall t\geq 0 \Big\}, \quad *\in \{-,+\}.\end{equation*}
{Let us introduce the difference of probabilities
$$B_\eps^K:=\P\Big(\sup_{t \leq T^K_\eps}
\Big|P^K_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}\Big|>\sqrt{\varepsilon}, T^K_\eps <\infty\Big)-\P\Big(\sup_{t \leq T^K_\eps}
\Big|P^K_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}\Big|>\sqrt{\varepsilon}, F^-_\eps, T^K_\eps \leq S^K_\eps \Big).$$
Then $B_\eps^K$ is nonnegative and we have
\begin{eqnarray*}
\label{majA} B_\eps^K &\leq &
\P(T^K_\eps <\infty )-\P(F^-_\eps, T^K_\eps \leq S^K_\eps )\\
&=& \P(T^K_\eps <\infty )-\P( T^K_\eps \leq S^K_\eps )+\P(T^{(+,K)}_\eps<\infty, T^K_\eps \leq S^K_\eps )-\P(F^-_\eps, T^K_\eps \leq S^K_\eps ),
\end{eqnarray*}
where the inequality comes from the inclusion $\{F^-_\eps, T^K_\eps \leq S^K_\eps\} \subset \{T^K_\eps <\infty \}$, as $S_\eps^K$ is almost surely finite.
The equality is a consequence of Coupling \eqref{couplage11}
which ensures that on the event $\{ T^K_\eps \leq S^K_\eps \}$, $\{T^{(+,K)}_\eps<\infty\}$ holds.
By noticing that
$$ \{F^-_\eps , T^K_\eps \leq S^K_\eps \} \subset \{ T^{(-,K)}_\eps <\infty ,T^K_\eps \leq S^K_\eps \} \subset \{ T^{(+,K)}_\eps <\infty ,T^K_\eps \leq
S^K_\eps \} $$
we get the bound
\begin{equation}
\label{majA} B_\eps^K \leq
\P(T^K_\eps <\infty )-\P( T^K_\eps \leq S^K_\eps )+\P(T^{(+,K)}_\eps<\infty )-\P(F^-_\eps).
\end{equation}
The first two probabilities are approximated in \eqref{taillepopfinale} and \eqref{res_champ},
and \eqref{hitting_times1} implies that $\P(T^{+,K}_\eps<\infty)-\P(F^-_\eps)=s_+(\eps)/(1-(1-s_+(\eps))^{\lfloor \eps K \rfloor})-s_-(\eps)$. Hence
\begin{equation}\label{restri} \underset{K \to \infty}{\limsup} \ B_\eps^K \leq c\eps, \end{equation}
where $c$ is finite for $\eps$ small enough, which allows us to focus on the intersection with the event $\{F^-_\eps,T^K_\eps
\leq S^K_\eps \}$.} We recall that $|N_{Ab_1}N_{ab_2}-N_{Ab_2}N_{ab_1}|\leq N_A N_a$, and that Assumption \ref{condweak} holds. Then \eqref{defM}
and \eqref{TKTKeps1} imply for $\eps$ small enough
$$\underset{t \leq T^K_\eps \wedge S_\eps^K }{\sup}\Big|P_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}-M_A(t)\Big|\leq r_Kf_a T^K_\eps
\underset{t \leq T^K_\eps \wedge S_\eps^K }{\sup} \left\{ \frac{N_a(t)}{N_A(t)} \right\}\leq
\frac{r_K f_a \varepsilon T^{K}_\eps}{\bar{n}_A-2\varepsilon {C_{A,a}}/{C_{A,A}}} \leq \frac{c\eps T^{K}_\eps}{\log K},$$
for a finite $c$. Moreover, $F^-_\eps \cap \{ T^K_\eps \leq S^K_\eps \} \subset F^-_\eps \cap \{ T^K_\eps \leq T_\eps^{(-,K)} \}$. Thus we get
$$\P\Big(\sup_{t \leq T^K_\eps}\Big|P_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}-M_A(t)\Big|>\frac{\sqrt{\varepsilon}}{2},F^-_\eps,T^K_\eps \leq S^K_\eps\Big)\leq
\P \Big( \frac{c \varepsilon T^{(-,K)}_\eps}{\log K}>\sqrt{\varepsilon}/2,F^-_\eps\Big).$$
Finally, Equation (\ref{equi_hitting}) ensures that $\lim_{K \to \infty}T^{(-,K)}_\eps/\log K = s_-(\eps)^{-1}$ a.s. on the non-extinction event $F^-_\eps$.
Thus for $\eps$ small enough,
\begin{equation} \label{partie1}\lim_{K \to \infty}\P\Big(\sup_{t \leq T^K_\eps}\Big|P_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}-M_A(t)\Big|>\frac{\sqrt{\varepsilon}}{2},F^-_\eps,T^K_\eps \leq
S^K_\eps\Big) = 0 . \end{equation}
To control the term $|M_A|$, we introduce the sequence of real numbers $t_K=(2f_a\log K )/S_{aA}$:
\begin{eqnarray*} \label{decoup_pro} \P\Big(\underset{t \leq T^K_\eps}{\sup}|M_A(t)|>\frac{\sqrt{\varepsilon}}{2},F^-_\eps,T^K_\eps \leq
S^K_\eps\Big)\leq \P\Big(\underset{t \leq T^K_\eps}{\sup}|M_A(t)|>\frac{\sqrt{\varepsilon}}{2},T^K_\eps \leq S^K_\eps \wedge t_K \Big)+\P(T^K_\eps > t_K,F^-_\eps).\end{eqnarray*}
Equation \eqref{def_s_-s_+1} yields, for $\eps$ small enough, $t_K\, s_-(\eps)/\log K>3/2$. Thus thanks to \eqref{equi_hitting} we get,
\begin{equation*} \label{star}\lim_{K \to \infty} \P(T^{K}_\eps > t_K,F^-_\eps)\leq \lim_{K \to \infty} \P(T^{-,K}_\eps > t_K,F^-_\eps)= 0 .\end{equation*}
Applying Doob's maximal inequality to the submartingale $|M_A|$ and \eqref{crocheten1K1} we get:
\begin{eqnarray*}\label{partie2}
\P(\underset{t \leq T^K_\eps}{\sup}|M_A(t)|>\sqrt{\varepsilon}/2, T^K_\eps \leq S^K_\eps\wedge t_K ) & \leq & \P(\underset{t \leq t_K}{\sup}|M_A(t \wedge T^K_\eps
\wedge S^K_\eps)|>\sqrt{\varepsilon}/2 ) \nonumber\\
& \leq & \frac{4}{\varepsilon}\E\Big[ \langle M_A\rangle_{t_K \wedge T^K_\eps \wedge S^K_\eps} \Big]\\
&\leq & \frac{4}{\eps}\frac{C(A,2\bar{n}_A) t_K}{(\bar{n}_A-2\eps C_{A,a}/C_{A,A})K},
\end{eqnarray*}
which goes to $0$ at infinity. Adding Equation \eqref{partie1} leads to:
$$ \lim_{K \to \infty}\P\Big(\sup_{t \leq T^K_\eps}\Big|P_{A,b_1}(t)-\frac{z_{Ab_1}}{z_A}\Big|>{\sqrt{\varepsilon}},F^-_\eps,T^K_\eps \leq
S^K_\eps\Big)=0. $$
Finally, Equation \eqref{restri} completes the proof of Lemma~\ref{lemmajpro}.
\end{proof}
{\bf Acknowledgements:} {\sl The author would like to thank Jean-François Delmas and Sylvie M\'el\'eard for their help and their careful
reading of this paper. She also wants to thank Sylvain Billiard and Pierre Collet for fruitful discussions during her work and
several suggestions, as well as the two anonymous referees for several corrections and improvements. This work was partially funded by project MANEGE `Mod\`eles
Al\'eatoires en \'Ecologie, G\'en\'etique et \'Evolution'
09-BLAN-0215 of ANR (French national research agency) and Chair Mod\'elisation Math\'ematique et Biodiversit\'e VEOLIA-Ecole Polytechnique-MNHN-F.X.}
\bibliographystyle{abbrv}
TITLE: Consider $u_t - \Delta u = f(u)$ and $u=0$ on $\partial\Omega \times (0,\infty)$. Show if $u(x,0) \geq 0$, then $u(x,t) \geq 0$
QUESTION [1 upvotes]: The question was asked here ($u$ is a $C^2$ solution of $u_t - \Delta u = f(u)$ and $u=0$ on $\partial\Omega \times (0,\infty)$. Show if $u(x,0) \geq 0$, then $u(x,t) \geq 0$)
However, my question is: if $u(x,0)\leq C$ for all $x \in \Omega$, how does one show that $u(x,t)\leq Ce^{Mt}$ for all $x \in \Omega$ and $t>0$?
REPLY [1 votes]: Here's an idea you can try. Let $v(x,t)$ solve $\partial_tv-\Delta v=f(v)$ for $v(x,0)=C$, and $w=v-u$. Then $w$ solves $\partial_tw-\Delta w=f(v)-f(u)$. By the fundamental theorem of calculus,
$$
f(v)-f(u)=(v-u)\int_0^1 f'((1-s)u+sv)ds=:w\,F,
$$
where $F(x,t)=\int_0^1f'((1-s)u(x,t)+sv(x,t))ds$. So $w$ solves
$$
\partial_tw-\Delta w=wF.
$$
But $wF=0$ when $w=0$, and $w(x,0)\ge 0$, so the supersolution argument you linked to seems to apply. Note that $v(x,t)\le Ce^{Mt}$, which is straightforward to show, since $v(x,t)=V(t)$ solves $V'(t)=f(V(t))$.
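For completeness, the growth bound invoked in the last sentence can be sketched via Grönwall's argument, under the standard assumptions for this exercise ($f$ Lipschitz with constant $M$, $f(0)\le 0$ and $V\ge 0$ — hypotheses assumed here, not stated in the thread):

```latex
% V'(t) = f(V(t)), V(0) = C, with f Lipschitz (constant M), f(0) <= 0:
V'(t) = f(V(t)) \le f(0) + M V(t) \le M V(t)
\quad\Longrightarrow\quad
\frac{d}{dt}\Big(e^{-Mt} V(t)\Big) \le 0
\quad\Longrightarrow\quad
V(t) \le V(0)\,e^{Mt} = C e^{Mt}.
```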
TITLE: Find the relation between the volumes of a cone and inscribed sphere
QUESTION [0 upvotes]: I have a question that I've been working on for a long time, but in vain. Can you help me?
Determine the relation between the volume of a cone circumscribed about a regular tetrahedron and the volume of a sphere inscribed in the tetrahedron.
What I tried to do is:
You'll help me a lot! Thank you!
REPLY [1 votes]: Working quickly, with tetrahedral edges 1 unit long, I found the base of the cone as $1\over\sqrt{3}$, the height of the cone as $\sqrt{2\over 3}$, and the radius of the sphere as $1\over 2\sqrt{3}$. If your answers differ, I'll redo it more carefully.
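Taking the answer's quantities at face value, the ratio can be computed directly (a quick check, not an independent derivation; note that the insphere radius of a unit-edge regular tetrahedron is usually quoted as $1/(2\sqrt{6})$ rather than $1/(2\sqrt{3})$, so the promised re-check may be warranted):

```python
from math import pi, sqrt

# Values stated in the answer for a regular tetrahedron with edge 1:
base_radius = 1 / sqrt(3)          # circumradius of an equilateral face
height = sqrt(2 / 3)               # height of the tetrahedron
sphere_radius = 1 / (2 * sqrt(3))  # the answer's inscribed-sphere radius

cone_volume = pi * base_radius**2 * height / 3
sphere_volume = 4 * pi * sphere_radius**3 / 3
ratio = cone_volume / sphere_volume  # simplifies to 2*sqrt(2)
```

With these numbers the ratio is exactly $2\sqrt{2}$; with the textbook inradius $1/(2\sqrt{6})$ it would instead come out as $8$.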
TITLE: Adjoint for solution operator of Poisson problem
QUESTION [0 upvotes]: Consider the Poisson problem with Neumann boundary conditions
$\Delta u = f$ on $\Omega \subset \mathbb{R}^d$
$\partial_{\nu}u = 0$ on $\partial \Omega$
where $\Omega$ is Lipschitz and $f \in L^2(\Omega)$. Now we know that the problem has unique solution $u \in H_0^1(\Omega)$ and that the solution operator is linear and bounded.
We can then interpret the solution operator as a function
$S\colon L^2(\Omega) \to L^2(\Omega), \, f \mapsto u$
this is a well-defined, bounded and linear map. My question now is: what is the Hilbert adjoint of $S$?
By definition, we have to find $S^\ast$ such that
$\langle f, \, S^\ast v \rangle = \langle S f, \, v \rangle = \langle u, \, v \rangle$
for every $v \in L^2(\Omega)$. Remarkably, I was unable to find anything about this in my literature.
Looking at the weak formulation of the problem, I might have guessed the adjoint is in direct relation to the differential operator, the Laplacian in this case, but I am not sure how, because $v$ is in $L^2(\Omega)$.
A related question would be if we could do something similar for the boundary data. That is having a problem
$\Delta u = 0$ on $\Omega \subset \mathbb{R}^d$
$\partial_{\nu}u = g$ on $\partial \Omega$
for $g \in L^2(\partial \Omega)$. Can we now say something about the Hilbert adjoint $B^\ast$ of
$B\colon L^2(\partial \Omega) \to L^2(\Omega), \, g \mapsto u$?
REPLY [1 votes]: Let $f, g\in L^2(\Omega)$ and $u = Sf, v = Sg$ (to be understood in the weak sense). Then
\begin{align*}
\langle Sf, g\rangle & = \int_\Omega ug\, dx = \int_\Omega \nabla v\cdot\nabla u\, dx \\
\langle f, Sg\rangle & = \int_\Omega fv\, dx = \int_\Omega \nabla u\cdot\nabla v\, dx.
\end{align*}
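The self-adjointness shown above can be seen concretely in a finite-difference toy model (a sketch using the 1D Dirichlet Laplacian rather than the Neumann problem of the question, which would need the usual compatibility/mean-zero fix): the discrete solution operator is a symmetric matrix.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# Standard second-order finite differences for -u'' with Dirichlet
# boundary conditions on a uniform interior grid:
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
S = np.linalg.inv(A)  # discrete solution operator f -> u

# A is symmetric, so S = S^T, i.e. <Sf, g> = <f, Sg> for all f, g
symmetric = np.allclose(S, S.T)
```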
TITLE: How to Integrate $ \int^{\pi/2}_{0} x \ln(\cos x) \sqrt{\tan x}\,dx$
QUESTION [11 upvotes]: Evaluate
$$\int^{\frac{\pi}{2}}_{0} x \ln(\cos x) \sqrt{\tan x}\,dx$$
Unfortunately, I have no idea on how to integrate this and thus cannot provide any inputs on my own. The only thing I noticed was that suppose we just had to find $$\int_0^{\pi/2} \ln(\sin(x))\ln(\cos(x)) dx$$ we could have written $$I(a,b)=\int_0^{\pi/2}\sin^a(x)cos^b(x)dx$$ converted this into the Beta and then the Gamma Function, taken its partial derivatives with respect to $a$ and $b$, and finally plugged in $a=b=0$ to get our answer. $$$$ I would be truly grateful if somebody could please help me solve this Integral. Many thanks!
REPLY [1 votes]: $$ I=-\frac{\pi}{8}\sqrt{2}({\pi}\ln2+4G+\frac{\pi^2}{3}+3\ln^22)$$
where $G$ is Catalan's constant.
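A numerical sanity check of this closed form (a sketch using the mpmath library; its tanh-sinh quadrature copes with the singularity at $\pi/2$):

```python
import mpmath as mp

mp.mp.dps = 30  # work with 30 significant digits

f = lambda x: x * mp.log(mp.cos(x)) * mp.sqrt(mp.tan(x))
numeric = mp.quad(f, [0, mp.pi / 2])

G = mp.catalan  # Catalan's constant
closed = -(mp.pi / 8) * mp.sqrt(2) * (
    mp.pi * mp.log(2) + 4 * G + mp.pi ** 2 / 3 + 3 * mp.log(2) ** 2
)
# closed evaluates to about -5.87; numeric should agree
```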
\begin{document}
\maketitle
\begin{abstract}
A proper vertex coloring of a graph is said to be \emph{locally
identifying} if the sets of colors in the closed neighborhood of
any two adjacent non-twin vertices are distinct. The lid-chromatic
number of a graph is the minimum number of colors used by a locally
identifying vertex-coloring. In this paper, we prove that for any
graph class of bounded expansion, the lid-chromatic number is
bounded. Classes of bounded expansion include minor closed classes
of graphs. For these latter classes, we give an alternative proof to
show that the lid-chromatic number is bounded. This leads to an
explicit upper bound for the lid-chromatic number of planar graphs.
This answers in a positive way a question of Esperet~\emph{et al.}
[L.~Esperet, S.~Gravier, M.~Montassier, P.~Ochem and A.~Parreau.
Locally identifying coloring of graphs. \emph{Electronic Journal of
Combinatorics}, {\bf 19(2)}, 2012.].
\end{abstract}
\section{Introduction}
A vertex-coloring is said to be \emph{locally identifying} if
$(i)$ the vertex-coloring is proper (i.e. no adjacent vertices receive
the same color), and $(ii)$ for any adjacent vertices $u,v$, the set of
colors assigned to the closed neighborhood of $u$ differs from the set
of colors assigned to the closed neighborhood of $v$ whenever these
neighborhoods are distinct. The \emph{locally identifying chromatic number} of
the graph $G$ (or lid-chromatic number, for short), denoted by
$\chi_{lid}(G)$, is the smallest number of colors required in any
locally identifying coloring of $G$.
Locally identifying colorings of graphs have been recently introduced
by Esperet et al.~\cite{EGMOP12} and later studied by Foucaud et
al.~\cite{FHLPP12}. They are related to identifying codes
\cite{KCL98,lobs}, distinguishing colorings \cite{BRS03,BS97,CHS96}
and locating-colorings~\cite{CEHSZ02}. For example, upper bounds on
lid-chromatic number have been obtained for bipartite graphs,
$k$-trees, outerplanar graphs and bounded degree graphs.
An open question asked by Esperet et al.~\cite{EGMOP12} was whether
$\chi_{lid}$ is bounded for the class of planar graphs. In this paper,
we answer this question positively by proving more generally that
$\chi_{lid}$ is bounded for any class of bounded expansion.
In Section~\ref{sec:treedepth}, we first give a tight bound of
$\chi_{lid}$ in term of the tree-depth. Then we use the fact that any
class of bounded expansion admits a low tree-depth coloring (that is a
$k$-coloring such that each triplet of colors induces a graph of
tree-depth $3$, for some constant $k$) to prove that it has bounded
lid-chromatic number.
In Section~\ref{sec:minor}, we focus on minor closed classes of graphs
which have bounded expansion and give an alternative bound on the
lid-chromatic number, which gives an explicit bound for planar graphs.
The next section is devoted to introduce notation and preliminary
results.
\section{Notation and preliminary results}
Let $G=(V,E)$ be a graph. For any vertex $u$, we denote by $N_G(u)$
its \emph{neighborhood} in $G$ and by $N_G[u]$ its \emph{closed
neighborhood} in $G$ ($u$ together with its adjacent vertices). The
notion of neighborhood can be extended to sets as follows: for
$X\subseteq V$, $N_G[X] = \set{w \in V(G) \mid \exists v\in X, w\in
N[v]}$ and $N_G(X) = N_G[X]\setminus X$. When the considered graph
is clearly identified, the subscript is dropped.
The \emph{degree} of vertex $u$ is the size of its neighborhood. The
\emph{distance} between two vertices $u$ and $v$ is the number of
edges in a shortest path between $u$ and $v$. For $X\subseteq V$, we
denote by $G[X]$ the subgraph of $G$ \emph{induced by} $X$.
We say that two vertices $u$ and $v$ are \emph{twins} if $N[u]=N[v]$
(although they are often called \emph{true twins} in the literature,
we call them \emph{twins} for convenience). In particular, $u$ and
$v$ are adjacent vertices. Note that if $u$ and $v$ are adjacent but
not twins, there exists a vertex $w$ which is adjacent to exactly one
vertex among $\{u,v\}$, i.e. $w\in N[u]\simdif N[v]$ (where $\Delta$
is the symmetric difference between sets). We say that $w$
\emph{distinguishes} $u$ and $v$, or simply $w$ \emph{distinguishes}
the edge $uv$. For a subset $X\subseteq V$, we say that a subset
$Y\subseteq V$ \emph{distinguishes} $X$ if for every pair $u,v$ of
non-twin vertices of $X$, there exists a vertex $w\in Y$ that
distinguishes the edge $uv$.
Let $c:V\to \mathbb N$ be a vertex-coloring of $G$. The coloring $c$
is \emph{proper} if adjacent vertices have distinct colors. We denote
by $\chi(G)$ the \emph{chromatic number of $G$}, i.e. the minimum
number of colors in a proper coloring of $G$. For any $X\subseteq V$,
let $c(X)$ be the set of colors that appear on the vertices of $X$. A
\emph{locally identifying coloring} (lid-coloring for short) of $G$ is
a proper vertex-coloring $c$ of $G$ such that for any two adjacent
vertices $u$ and $v$ that are not twins (i.e. $N[u]\ne N[v]$), we have
$c(N[u])\ne c(N[v])$. A graph $G$ is \emph{$k$-lid-colorable} if it
admits a locally identifying coloring using at most $k$ colors and the
minimum number of colors needed for any locally identifying coloring
of $G$ is the \emph{locally identifying chromatic number}
(lid-chromatic number for short) denoted by $\chi_{lid}(G)$. For a vertex
$u$, we say that $u$ \emph{sees} color $a$ if $a\in c(N[u])$.
For two adjacent vertices $u$ and $v$, a color that is in the set
$c(N[u])\simdif c(N[v])$ \emph{separates} $u$ and $v$, or simply
\emph{separates} the edge $uv$. The notion of chromatic number (resp.
lid-chromatic number) can be extended to a class of graphs $\clac$ as
follows: $\chi(\clac) = \sup\set{\chi(G), G\in \clac}$ (resp.
$\chi_{lid}(\clac) = \sup\set{\chi_{lid}(G), G\in \clac}$).
The following theorem is due to Bondy~\cite{B72}:
\begin{theorem}[Bondy's theorem \cite{B72}]
Let $\mathcal A=\{A_1,...,A_n\}$ be a collection of $n$ distinct
subsets of a finite set $X$. There exists a subset $X'$ of $X$ of
size at most $n-1$ such that the sets $A_i\cap X'$ are all distinct.
\end{theorem}
\begin{corollary}\label{cor:bondy}
Let $C$ be a $n$-clique subgraph of $G$. There exists a vertex
subset $S(C)\subseteq V(G)$ of size at most $n - 1$ that
distinguishes all the pair of non-twin vertices of $C$.
\end{corollary}
\begin{proof}
Let $C$ be an $n$-clique subgraph of $G$ induced by the vertex set
$V(C) = \set{v_1, v_2, \ldots,v_n}$. Let $\mathcal A=\set{N[v_i]
\mid v_i\in V(C)}$ be a collection of distinct subsets of the
finite set $X=\bigcup_{1\le i \le n}N[v_i]$. Note that some $v_i$'s
might be twins in $G$ (i.e. $N[v_i]=N[v_j]$ for some $v_i,v_j\in
V(C)$) and therefore $|\mathcal A|$ could be smaller than $n$. By
Bondy Theorem, there exists $S(C)\subseteq X$ of size at most
$|\mathcal A| - 1 \le n-1$ such that for any distinct elements
$A_1,A_2$ of $\mathcal A$, we have $A_1 \cap S(C) \neq A_2 \cap
S(C)$.
Let us prove that $S(C)$ is a set of vertices that distinguish all
the pairs of non-twin vertices of $C$. For a pair of non-twin vertices
$v_i,v_j$ of $C$, we have $N[v_i]\neq N[v_j]$. By definition of $S(C)$,
we have $N[v_i] \cap S(C) \neq N[v_j] \cap S(C)$, then there exists
$w\in S(C)$ that belongs to $N[v_i]\simdif N[v_j]$. Therefore, $w$
distinguishes the edge $v_iv_j$.
\end{proof}
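Bondy's theorem as used in this corollary can be illustrated by a brute-force search (a toy sketch outside the paper; the function name and the example family are made up for illustration):

```python
from itertools import combinations

def bondy_distinguisher(sets):
    """Smallest subset X' of the ground set whose traces A_i & X'
    are pairwise distinct, found by exhaustive search (the input
    sets must be pairwise distinct, as in Bondy's theorem)."""
    family = [frozenset(s) for s in sets]
    ground = sorted(set().union(*family))
    for size in range(len(ground) + 1):
        for cand in combinations(ground, size):
            traces = {s & frozenset(cand) for s in family}
            if len(traces) == len(family):
                return set(cand)

# Closed neighbourhoods of the triangle {1,2,3} with a pendant 4 on
# vertex 1 and a pendant 5 on vertex 2 (three distinct neighbourhoods):
family = [{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 3}]
X = bondy_distinguisher(family)  # Bondy guarantees |X| <= 3 - 1 = 2
```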
\section{Bounded expansion classes of graphs}\label{sec:treedepth}
A rooted tree is a tree with a special vertex, called the \emph{root}.
The \emph{height} of a vertex $x$ in a rooted tree is the number of
vertices on a path from the root to $x$ (hence, the height of the root
is 1). The \emph{height} of a rooted tree $T$ is the maximum height of
the vertices of $T$. If $x$ and $y$ are two vertices of $T$, $x$ is an
\emph{ancestor} of $y$ in $T$ if $x$ belongs to the path between $y$
and the root. The \emph{closure} $\clos(T)$ of a rooted tree $T$ is
the graph with vertex set $V(T)$ and edge set $\{xy \ | \ x\text{ is
an ancestor of }y\text{ in }T, x\neq y\}$. The \emph{tree-depth}
$\td(G)$ of a connected graph $G$ is the minimum height of a rooted
tree $T$ such that $G$ is a subgraph of $\clos(T)$. If $G$ is not
connected, the tree-depth of $G$ is the maximum tree-depth of its
connected components.
Let $p$ be a fixed integer. A \emph{low tree-depth coloring} of a
graph $G$ (relatively to $p$) is a coloring of the vertices of $G$
such that the union of any $i\leq p$ color classes induces a graph of
tree-depth at most $i$. Let $\chi^{\td}_p(G)$ be the minimum number of
colors required in such a coloring. Note that as tree-depth one graphs and
tree-depth two graphs are respectively the edgeless graphs and the star forests,
$\chi^{\td}_1$ and $\chi^{\td}_2$ respectively correspond to the usual
chromatic number and the star chromatic number.
\medskip
In the following of this section, we first give a tight bound on the
lid-chromatic number in terms of tree-depth.
\begin{proposition}\label{prop:td}
For any graph $G$, $\chi_{lid}(G)\leq 2\td(G)-1$ and this is tight.
\end{proposition}
Using this bound, we then bound the lid-chromatic number
in terms of $\chi^{\td}_3$.
\begin{theorem}\label{thm:chlidch3}
For any graph $G$, $$\chi_{lid}(G)\leq 6^{\chi^{\td}_3(G) \choose 3}.$$
\end{theorem}
Classes of graphs of bounded expansion have been introduced by Ne\v
set\v ril and Ossona de Mendez~\cite{NO08}. These classes contain
minor closed classes of graphs and any class of graphs defined by an
excluded topological minor. Actually, these classes of graphs are
closely related to low tree-depth colorings:
\begin{theorem}[Theorem 7.1~\cite{NO08}]\label{thm:nesetril-pom}
A class of graphs $\clac$ has bounded
expansion if and only if $\chi^{\td}_p(\clac)$ is bounded for
any~$p$.
\end{theorem}
We therefore deduce the following corollary from
Theorems~\ref{thm:chlidch3} and~\ref{thm:nesetril-pom}:
\begin{corollary}\label{cor:bounded}
For any class $\clac$ of bounded expansion,
$\chi_{lid}(\clac)$ is bounded.
\end{corollary}
It is in particular true for any class of bounded tree-width. A consequence is that $\chi_{lid}$ is bounded for chordal graphs by a function of the clique number (which equals the tree-width plus one for a chordal graph). It is conjectured by Esperet {\em et al.} \cite{EGMOP12} that $\chi_{lid}(G)\leq 2\omega(G)$ if $G$ is chordal.
\medskip
We now prove Proposition~\ref{prop:td}.
\begin{proofof}{Proposition~\ref{prop:td}}
Let us first prove that the bound is tight. Consider the graph
$H_n$ obtained from a complete graph, with vertex set $\{a_1,\ldots,
a_n\}$, by adding a pendant vertex $b_i$ to every $a_i$ but one, say
for $1\le i< n$. The tree-depth of this graph is at least $n$ as it
contains an $n$-clique. Indeed, given a rooted tree $T$, two distinct vertices
at the same height are non-adjacent in $\clos(T)$; we thus need at
least $n$ levels. Actually the tree-depth of this graph is at most
$n$ since the tree $T$ rooted at $a_1$, and such that $a_i$ has two sons
$a_{i+1}$ and $b_i$, for $1\le i< n$, has height $n$ and is such that $\clos(T)$
contains $H_n$ as a subgraph.
Let us show that in any lid-coloring of $H_n$ all the vertices must
have distinct colors, and thus use $2n-1 = 2 \td(H_n) - 1$
colors. Indeed, two vertices $a_i$ must have different colors as the
coloring is proper. A vertex $b_j$ cannot use the same color as a
vertex $a_i$, as otherwise the vertex $a_j$ would only see the $n$
colors used in the clique, just as $a_n$. Similarly if two vertices
$b_i$ and $b_j$ would use the same color, the vertices $a_i$ and
$a_j$ would see the same set of colors.
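The tightness argument can be verified mechanically for $n=3$: the exhaustive search below (an illustrative sketch outside the proof) confirms that $H_3$ needs $5 = 2\td(H_3)-1$ colors.

```python
from itertools import product

def is_lid(colors, adj):
    """Proper coloring + locally identifying condition: adjacent
    non-twin vertices get distinct closed-neighbourhood color sets."""
    closed = {v: adj[v] | {v} for v in adj}
    for u in adj:
        for v in adj[u]:
            if colors[u] == colors[v]:
                return False
            if closed[u] != closed[v]:  # non-twin adjacent pair
                if ({colors[w] for w in closed[u]}
                        == {colors[w] for w in closed[v]}):
                    return False
    return True

def lid_number(adj):
    """Smallest k admitting a lid-coloring, by exhaustive search."""
    verts = sorted(adj)
    for k in range(1, len(verts) + 1):
        for assignment in product(range(k), repeat=len(verts)):
            if is_lid(dict(zip(verts, assignment)), adj):
                return k

# H_3: triangle a1 a2 a3 with pendant b1 on a1 and pendant b2 on a2
adj = {'a1': {'a2', 'a3', 'b1'}, 'a2': {'a1', 'a3', 'b2'},
       'a3': {'a1', 'a2'}, 'b1': {'a1'}, 'b2': {'a2'}}
```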
Let us now focus on the upper bound. We prove the result for a
connected graph and by induction on the tree-depth of $G$, denoted
by $k$. The result is clear for $k=1$ (the graph is a single
vertex).
Let $G$ be a graph of tree-depth $k>1$ and let $T$ be a rooted tree
of height $k$ such that $G$ is a subgraph of $\clos(T)$. If $T$ is
a path, the result is clear since there are only $k$ vertices. So
assume that $T$ is not a path, and let $r$ be the root of $T$. Let
$s$ be the smallest height such that there are at least two vertices
of height $s+1$. We name $r_i$, for $i\in\set{1,...,s}$, the unique
vertex of height $i$. Let $R=\{r_1,...,r_{s}\}$. Note that each of
the vertices of $R$ is adjacent to all the vertices of $\clos(T)$.
Therefore, we can choose the way we label the $s$ vertices in $R$
(i.e. we can choose the height of each of them in $T$) without
changing $\clos(T)$.
Necessarily, $G\setminus R$ has at least two connected components.
Let $G_1,\ldots,G_{\ell}$ be its connected components and thus
$\ell\ge 2$. We choose $T$ such that $s$ is minimal. It implies that
for each $i\in \{1,...,s\}$, $r_i$ has neighbors in all the
components $G_1$,...,$G_{\ell}$. Indeed, if it is not the case, by
permuting the elements of $R$ (this is possible by the above
remark), we can assume without loss of generality that $r_{s}$ does
not have a neighbor in $G_{\ell}$. Therefore, the edges of
$e(r_{s},G_\ell) = \{r_{s}x : x\in V(G_\ell)\}$ in $\clos(T)$ are
not used by $G$. Then let $T'$ be the tree obtained from $T$ by
moving the whole component $G_{\ell}$ one level up in such a way
that the root of the subtree corresponding to $G_{\ell}$ is now the
son of $r_{s-1}$ (instead of $r_s$ previously). Note that
$\clos(T')$ is isomorphic to $\clos(T) \setminus e(r_{s},G_\ell)$
and thus $G$ is a subgraph of $\clos(T')$. This new tree $T'$ has
two vertices at height $s$, contradicting the minimality of $s$.
Any connected component $G_j$ has tree-depth at most $k'=k-s<k$.
By induction, for each $j\in \{1,...,\ell\}$, there exists a
lid-coloring $c_j$ of $G_j$ using colors in
$\set{1,\ldots,2k'-1}$. For each $c_j$, there is a minimum value
$s_j$ such that every vertex $r_i$ sees a color in
$\set{1,\ldots,s_j}$ in $G_j$. We choose a $(2k'-1)$-lid-coloring
$c_j$ of $G_j$ such that $s_j$ is minimized. Note that for each
color $a\le s_j$, there exists $r_i\in R$ such that $r_i$ sees color
$a$ in $G_j$ but no other color of $\set{1,\ldots,s_j}$. Otherwise,
after permuting colors $a$ and $s_j$, every vertex $r_i\in R$ would
see a color in $\set{1,\ldots,s_{j}-1}$, contradicting the
minimality of $s_j$. Assume without loss
of generality that $s_1 \ge s_2\ge \ldots \ge s_\ell$.
We replace in $c_1$ the colors $1,2,\ldots, s_1$ by
$1',2',\ldots,s'_1$. Note that now each vertex $r_i$ sees a color in
$\set{1',\ldots,s'_1}$ (in $G_1$) and a color in
$\set{1,\ldots,s_2}$ (in $G_2$). Furthermore, the other vertices of
$G$ (that is the vertices in $G_1,\ldots,G_\ell$) do not have this
property since $s_1\ge s_2$. Thus at this step every edge $xr_i$
with $x$ in some $G_j$ is separated.
Now we color each vertex $r_i$ with color $i^*$. Let $c : V(G) \to
\set{1^*,\ldots,s^*}\cup\set{1',\ldots,s'_1}\cup\set{1,\ldots,2k'-1}$
be the current coloring of $G$.
Note that now every distinguishable edge $xy$ in some $G_j$ is
separated. Indeed, either $xy$ was distinguished in $G_j$ and it has
been separated by $c_j$, or $xy$ is distinguished by some $r_i$ and
it is separated by the color $i^*$. Note also that $c$ is a proper coloring.
It remains to deal with the edges $r_ir_j$. For that purpose we will
refine some color classes. In the following lemma we show that such
refinements do not damage what we have done so far.
\begin{claim}
Consider a graph $G$ and a coloring $\varphi\ :\ V(G)
\longrightarrow \set{1,\ldots,k}$. Consider any refinement
$\varphi'$ of $\varphi$, obtained from $\varphi$ by recoloring
with color $k+1$ some vertices colored $i$, for some $i$. Any edge
$xy$ of $G$ properly colored (resp. separated) by $\varphi$ is
properly colored (resp. separated) by $\varphi'$.
\end{claim}
Indeed if $\varphi(x)\neq \varphi(y)$ then $\varphi'(x)\neq
\varphi'(y)$, and if $i\in \varphi(N[x])\simdif \varphi(N[y])$ then
$i$ or $k+1 \in \varphi'(N[x])\simdif \varphi'(N[y])$.
Let us define a relation $\mathcal R$ among vertices in $R$ by
$r_i\mathcal R r_j$ if and only if $c(N[r_i])=c(N[r_j])$. Let
$R_1$,...,$R_{\bar{s}}$ be the equivalence classes of the relation
$\mathcal R$ (note that each $R_i$ forms a clique since every
$r_i$ has distinct colors). We have $\bar{s}\geq s_1$. Indeed, by
definition of $s_1$ and the coloring $c_1$, for each color $a\in
\set{1',\ldots, s'_1}$, there exists $r_i\in R$ that sees $a$ in
$G_1$ but no other color of $\set{1',\ldots,s'_1}$. This vertex
$r_i$ belongs to some equivalence class $R_j$ and thus all the
vertices of $R_j$ see color $a$ in $G_1$ but no other color of
$\set{1',\ldots,s'_1}$.
By Corollary~\ref{cor:bondy}, there is a
vertex set $S(R_i)$ of size at most $|R_i|-1$ which distinguishes
all pairs of non-twin vertices in $R_i$. We give to the vertices of
$S(R_i)$ new distinct colors. By the previous claim, this last operation
does not damage the coloring, and now all the distinguishable edges
are separated.
For this last operation we need at most $s-\bar{s}$ new colors, and
we used $2k'-1$ colors $\set{1,\ldots,2k'-1}$, $s_1$ colors
$\set{1',\ldots,s_1'}$ and $s$
colors $\set{1^*,\ldots,s^*}$; hence the total number of colors is $ (s-\bar{s}) +
(2k'-1) + s_1 + s = 2k -1 +s_1 -\bar{s} \le 2k-1$. This concludes
the proof of the theorem.
\end{proofof}
We are now ready to prove Theorem~\ref{thm:chlidch3}:
\begin{proofof}{Theorem~\ref{thm:chlidch3}}
Let $\alpha$ be a low tree-depth coloring of $G$ with parameter
$p=3$ and using $\chi^{\td}_3(G)$ colors. Let
$A=\{\alpha_1,\alpha_2,\alpha_3\}$ be a triplet of three distinct
colors and let $H_A$ be the subgraph of $G$ induced by the vertices
colored by a color of $A$. Since $H_A$ has tree-depth at most $3$, by
Proposition \ref{prop:td}, $H_A$ admits a lid-coloring $c_A$ with five colors
(say colors $1$ to $5$). We extend $c_A$ to the whole graph by
giving color $0$ to the vertices in $V(G)\setminus V(H_A)$.
Let $A_1, A_2, \ldots, A_k$ be the $k={\chi^{\td}_3(G) \choose 3}$
distinct triplets of colors.
We now construct a coloring $c$ of $G$ giving to each vertex $x$ of
$G$ the $k$-uplet $$(c_{A_1}(x),c_{A_2}(x),\ldots,c_{A_k}(x)).$$
The coloring $c$ is using $6^k$ colors.
Clearly it is a proper coloring: each pair of adjacent vertices will
be in some common graph $H_A$ and will receive distinct colors in
this graph. Let $x$ and $y$ be two adjacent vertices with $N[x]\neq
N[y]$. Let $w$ be a vertex adjacent to only one vertex among $x$ and
$y$. Let $A=\{\alpha(x), \alpha(y), \alpha(w)\}$. Vertices $x$ and
$y$ are not twins in the graph $H_A$. Hence $c_A(N[x])\neq
c_A(N[y])$ and therefore, $c(N[x])\neq c(N[y])$.
\end{proofof}
\section{Minor closed classes of graphs}\label{sec:minor}
Let $G$ and $H$ be two graphs. $H$ is a \emph{minor} of $G$ if $H$ can
be obtained from $G$ with successive edge deletions, vertex deletions
and edge contractions. A class $\clac$ is \emph{minor closed} if for
any graph $G$ of $\clac$, for any minor $H$ of $G$, we have $H\in
\clac$. The class $\clac$ is \emph{proper} if it is not the class of
all graphs. Given a graph $H$, an \emph{$H$-minor-free graph} is a
graph that does not have $H$ as a minor. We denote by $\clak_n$ the
$K_n$-minor-free class of graphs. It is clear that any proper minor
closed class of graphs is included in the class $\clak_n$ for some
$n$. It is folklore that any proper minor closed class of graphs
$\clac$ has a bounded chromatic number $\chi(\clac)$.
The class of graphs of bounded expansion includes all the proper minor
closed classes of graphs. Thus, by Corollary~\ref{cor:bounded}, proper
minor closed classes have bounded lid-chromatic number. In this
section, we focus on these latter classes and give an alternative
upper bound on the lid-chromatic number. This gives us an explicit
upper bound for the lid-chromatic number of planar graphs.
Consider any proper minor closed class of graphs $\clac$. Since
$\clac$ is proper, there exists $n$ such that $\clac$ does not contain
$K_n$, that is $\clac\subseteq\clak_n$. Let $\cla{C}^N$ be the class of
graphs defined by $H\in \cla{C^N}$ if and only if there exists
$G\in\clac$ and $v\in G$ such that $H=G[N(v)]$. Note that $\cla{C^N}$
is a minor-closed class of graphs. Indeed, given any $H\in\cla{C^N}$,
let $G\in\clac$ and $v\in V(G)$ such that $H=G[N(v)]$. Let
$H'$ be any minor of $H$. Since $\clac$ is minor-closed and $H$ is a
subgraph of $G$, there exists a minor $G'$ of $G$ such that $H' =
G'[N(v)]$. Therefore, $H'$ belongs to $\cla{C^N}$.
We prove the following result on minor-closed classes of graphs:
\begin{theorem}\label{thm:induction}
Let $\clac$ be a proper minor closed class of graphs and let
$n\ge 3$ be such that $\clac \subseteq \clak_n$. Then
$$\chi_{lid}(\clac) \leq 4\cdot\chi_{lid}(\cla{C^N})\cdot\chi(\clac)^{n-3}$$
\end{theorem}
The class $\clak_3$ is exactly the class of forests. Esperet et al.
\cite{EGMOP12} proved the following result.
\begin{proposition}[\cite{EGMOP12}]\label{prop:tree}
$\chi_{lid}(\clak_3)\leq 4$.
\end{proposition}
It is clear that $\clak_3^N$ is the class of edgeless graphs and therefore,
$\chi_{lid}(\clak_3^N) = 1$. Note that Theorem~\ref{thm:induction} implies
Proposition~\ref{prop:tree}.
Assume that $\chi_{lid}(\clak_{n-1})$ is bounded for some $n\ge 4$. It
is clear that $\clak_{n}^N = \clak_{n-1}$. Then, by
Theorem~\ref{thm:induction}, we have $\chi_{lid}(\clak_n) \leq
4\cdot\chi_{lid}(\clak_{n-1})\cdot\chi(\clak_n)^{n-3}$. Since
$\chi_{lid}(\clak_{n-1})$ and $\chi(\clak_n)$ are bounded,
$\chi_{lid}(\clak_n)$ is bounded.
Esperet et al. \cite{EGMOP12} also proved the following result.
\begin{proposition}[\cite{EGMOP12}]\label{prop:outer}
If $G$ is an outerplanar graph, $\chi_{lid}(G)\leq 20$.
\end{proposition}
We can then deduce from Theorem~\ref{thm:induction} and
Proposition~\ref{prop:outer} the following corollary:
\begin{corollary}
Let $\cla{P}$ be the class of planar graphs. Then
$\chi_{lid}(\cla{P})\leq 1280$.
\end{corollary}
\begin{proof}
Any graph $G\in\cla{P}$ is $\set{K_{3,3},K_5}$-minor free and thus
$\cla{P}$ is a proper minor closed class of graphs. Moreover, the
neighborhood of any vertex of $G\in\cla{P}$ is an outerplanar graph.
By Proposition \ref{prop:outer}, we have $\chi_{lid}(\cla{P}^N)\leq
20$. Furthermore, the Four-Color-Theorem gives $\chi(\cla{P}) = 4$.
By Theorem \ref{thm:induction}, $\chi_{lid}(\cla{P})\leq 4\times 20
\times 4^2=1280$.
\end{proof}
We finally give the proof of Theorem~\ref{thm:induction}.
\begin{proofof}{Theorem~\ref{thm:induction}}
Let $G\in \clac$ and let $u$ be a vertex of minimum degree. For any
$i$, define $\Vui$ as the set of vertices of $G$ at distance exactly
$i$ from $u$ and let $\Gui = G[\Vui]$. Let $s$ be the largest
distance from a vertex of $V$ to $u$. In other words, there are
$s+1$ nonempty sets $\Vui$ (note that $\V{u}{0} = \set{u}$).
For any $i$, contracting in $G$ the subgraph
$G[\V{u}{0}\cup\V{u}{1}\cup\ldots\cup\V{u}{i-1}]$ into a single vertex
$x$ gives a graph $G'\in \clac$ in which $x$ is adjacent to exactly
the vertices of $\Gui$. Therefore, for any $i$, $\Gui\in\cla{C^N}$.
Hence, $\chi_{lid}(\Gui) \le \chi_{lid}(\cla{C^N})$ for any $i$.
Moreover, $\cla{C^N} \subseteq \clak_{n-1}$. Indeed, suppose that
there exists $H\in\cla{C^N}$ that admits $K_{n-1}$ as a minor.
Therefore there exists $G\in \clac$ such that $H=G[N(v)]$ for some
$v\in V(G)$. Taking $v$ together with its neighborhood would give
$K_n$ as a minor, which contradicts the fact that
$\clac\subseteq\clak_n$. Hence, $\Gui\in\clak_{n-1}$ for any $i$.
We construct a lid-coloring of $G$ using
$4\cdot\chi_{lid}(\cla{C^N})\cdot\chi(\clac)^{n-3}$ colors. This coloring is
constructed with three different colorings of the vertices of $G$:
$c_1$ which uses $4$ colors, $c_2$ which uses
$\chi_{lid}(\cla{C^N})$ colors and $c_3$ which is itself composed of
$n-3$ colorings with $\chi(\clac)$ colors. The final color $c(v)$ of
a vertex $v$ will be the triplet $(c_1(v),c_2(v),c_3(v))$. Hence
the coloring $c$ uses at most
$4\chi_{lid}(\cla{C^N})\chi(\clac)^{n-3}$ colors. The coloring $c_1$
is used to separate the pairs of vertices that lie in distinct sets
$\Vui$. The coloring $c_2$ separates the pairs of vertices that lie
in the same set $\Vui$ and are not twins in $\Gui$. Finally, the
coloring $c_3$ separates the pairs of vertices that lie in the same
set $\Vui$, that are twins in $\Gui$ but that are not twins in $G$.
The coloring $c_1$ is simply defined by $c_1(v)\equiv i\bmod 4$ if
$v\in \Vui$.
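As a purely illustrative aside (not part of the proof), the layers $\Vui$ and the coloring $c_1$ are straightforward to compute by breadth-first search; the sketch below uses a hypothetical adjacency-list encoding of a graph, with names (`adj`, `bfs_layers`, `c1_coloring`) of our own choosing.

```python
# Illustrative sketch (assumed adjacency-list encoding): compute the distance
# layers V_{u,i} by breadth-first search and assign c_1(v) = i mod 4.
from collections import deque

def bfs_layers(adj, u):
    """Return {vertex: distance from u} for the connected component of u."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def c1_coloring(adj, u):
    """c_1(v) = i mod 4, where v lies in V_{u,i}."""
    return {v: i % 4 for v, i in bfs_layers(adj, u).items()}

# A small example graph: a path 0-1-2-3-4-5 rooted at u = 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(c1_coloring(adj, 0))  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1}
```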
To define $c_2$, we define for each $i$, $0\leq i \leq s$, a
lid-coloring $c^i_{2}$ of $\Gui$ using colors $1$ to
$\chi_{lid}(\cla{C^N})$. Then $c_2$ is defined by $c_2(v)=c^i_{2}(v)$
if $v\in \Vui$.
We now define the coloring $c_3$. Let $\Vui^{id}$ be the set of
vertices of $\Vui$ that have a twin in $\Gui$:
$$\Vui^{id}=\{ v \in \Vui \ |\ \exists w \in \Vui, N_{\Gui}[v]= N_{\Gui}[w]\}.$$
Let $\Gui^{id} = \Gui[\Vuid]$. Since the relation ``be twin'' is
transitive (i.e. if $u$ and $v$ are twins, and $v$ and $w$ are twins,
then $u$ and $w$ are twins), then $\Gui^{id}$ is clearly a union of
cliques. In addition, since $\Gui\in\clak_{n-1}$, the
connected components of $\Gui^{id}$ are cliques of size at most $n-2$.
Let $C$ be a clique of $\Gui^{id}$.
By Corollary~\ref{cor:bondy}, there exists a subset $S(C)\subseteq V(G)$
of at most $n-3$ vertices that distinguishes all the pairs of non-twin
vertices of $C$. Note that by definition of $C$, $S(C)\cap \Vui = \emptyset$,
and thus $S(C) \subseteq V_{u,i-1}\cup V_{u,i+1}$.
Let $\mathcal S = \{(v,C) \mid v \in S(C) \text{ and $C$ is a clique in
some graph $\Gui^{id}$}\}$.
We partition $\mathcal S$ in $s\times(n-3)$ sets $S_i^{k}$, $1\leq i
\leq s$, $1\leq k \leq n-3$, such that:
\begin{itemize}
\item if $(v,C) \in S_i^k$ for some $k$, then $v\in \Vui$;
\item if $(v,C)$ and $(w,C')$ are two elements of $S_i^k$, then $C\neq C'$.
\end{itemize}
This partition can be done because each set $S(C)$ has size at most
$n-3$.
For each $S_i^k = \set{(x_1,C_1),(x_2,C_2),\ldots,(x_t,C_t)}$, we
define a graph $H_i^k$ as follows. We start from the graph induced by
$\Vui \cup V(C_1) \cup V(C_2) \cup \ldots \cup V(C_t)$. Then,
for each $(x_j,C_j)$ in $S_i^k$, we contract $C_j$ into a single vertex
$y_j$ and finally, we contract the edge $x_jy_j$ onto the vertex $x_j$. Note that
$\Vui$ is the vertex set of $H_i^k$.
Note also that $H_i^k\in\clac$ since it is obtained from a subgraph of $G$
by successive edge contractions. Therefore, $\chi(H_i^k) \leq
\chi(\clac)$.
We now define a proper coloring $c_3^{i,k}$ of $H_i^k$ with colors $1$
to $\chi(\clac)$. Let $c_3^k$ be the coloring of
vertices of $G$ defined by $c_3^k(v)=c_3^{i,k}(v)$ if $v\in \Vui$.
Finally, $c_3$ is defined by $c_3(v)=(c_3^1(v),\ldots,c_3^{n-3}(v))$, and
the final color of $v$ is $c(v)=(c_1(v),c_2(v),c_3(v))$.
We now prove that $c$ is a lid-coloring of $G$. First, $c$ is a proper
coloring. Indeed, two adjacent vertices that are not in the same set
$\Vui$ lie in consecutive sets $\Vui$ and $\V{u}{i+1}$ and thus have
different colors in $c_1$, and two adjacent vertices in the same set
$\Vui$ have different colors in $c_2$ (which induces a proper coloring
on $\Vui$).
Let now $x$ and $y$ be two adjacent vertices with $N[x]\neq N[y]$. We
will prove that $c(N[x])\neq c(N[y])$. We distinguish three cases.
\begin{enumerate}[C{a}se 1:]
\item $x\in \Vui$ and $y\in \V{u}{i+1}$.
If $x=u$, then $y$ has a neighbor $v$ in $\V{u}{i+2}=\V{u}{2}$.
Indeed, $u$ was chosen with minimum degree, so $y$ has at least as
many neighbors as $u$ and does not have the same neighborhood
as $u$, implying that $y$ has a neighbor in $\V{u}{2}$. Then
$c_1(v)=2\notin c_1(N[u])$ and so $c(N[x])\neq c(N[y])$.
Otherwise, $x$ has a neighbor $v$ in $\V{u}{i-1}$ and $c_1(v)\equiv
i-1 \pmod 4 \in c_1(N[x])$. On the other hand, all the neighbors
of $y$ belong to $\Vui \cup \V{u}{i+1} \cup \V{u}{i+2}$ and
therefore $c_1(N[y]) \subseteq \{i,i+1,i+2 \pmod 4\}$. Thus,
$c(N[x])\neq c(N[y])$.
\item $x$ and $y$ belong to $\Vui$ and they are not twins in $\Vui$
(i.e. $N_{\Vui}[x] \neq N_{\Vui}[y]$).
By definition of the coloring $c_2^i$, there exists a color $a$ that
separates $x$ and $y$, i.e. $a \in c_2^i(N_{\Vui}[x])\simdif
c_2^i(N_{\Vui}[y])$. Then we necessarily have $c(N[x])\neq c(N[y])$.
\item $x$ and $y$ belong to $\Vui$ and they are twins in $\Vui$
(i.e. $N_{\Vui}[x] = N_{\Vui}[y]$).
In this case, vertices $x$ and $y$ are in the set $\Vui^{id}$. Let
$C$ be the clique of $\Gui^{id}$ containing $x$ and $y$. Let $v\in S(C)$
be a vertex that distinguishes $x$ and $y$; thus, $v\in
\V{u}{j}$ for $j=i-1$ or $j=i+1$. Without loss of generality, $v\in N[x]$ but $v\notin
N[y]$. Let $S_j^k$ be the part of $\mathcal S$ that contains
$(v,C)$. Suppose that there exists a neighbor $w$ of $y$ such that
$c(v)=c(w)$. Then $w$ lies in $\V{u}{j}$ because of the coloring
$c_1$. However, in the graph $H_j^k$, the vertex $v$ is adjacent to
all the neighbors of $y$ in $\V{u}{j}$, and in particular is
adjacent to $w$; therefore, $c_3^{j,k}(v) \neq c_3^{j,k}(w)$, a
contradiction. Therefore, the vertex $y$ does not have any neighbor
that has the same color as $v$. Hence, $c(v)\notin c(N[y])$, and
$c(N[x])\neq c(N[y])$.
\end{enumerate}
\end{proofof} | {"config": "arxiv", "file": "1212.5468.tex"} |
TITLE: How to change the order of integration of $\int_{-1}^1dx \int_{1-x^2}^{2-x^2}f(x,y)dy$
QUESTION [0 upvotes]: How to change the order of integration:
$$\int_{-1}^1dx \int_{1-x^2}^{2-x^2}f(x,y)dy$$
I tried to sketch the region and got the picture below, where the red lines correspond to $x^2< 1$ and the green curves to $x^2 = 1-y$ and $x^2 = 2-y$. However, I can't seem to get it right...
The solution should be:
REPLY [1 votes]: The easy way to do this kind of things is to rewrite integral boundaries into indicator functions:
$$I:=\int_{-1}^1dx\int_{1-x^2}^{2-x^2}dy\, f(x,y)=\int_{-1}^1dx\int_{0}^{2}dy\, f(x,y)\chi_{1-x^2<y<2-x^2}$$
where $\chi_\text{condition}$ is 1 if the condition is met and 0 otherwise.
Then you can switch easily
$$I=\int_{0}^{2}dy\int_{-1}^1dx\, f(x,y)\chi_{1-x^2<y<2-x^2}
=\int_{0}^{2}dy\int_{-1}^1dx\, f(x,y)\chi_{1-y<x^2<2-y}$$
Given that the lower bound $1-y$ for $x^2$ changes sign at $y=1$, it makes sense to cut the integral on $y$ at $y=1$:
$$I=\int_{0}^{1}dy\int_{-1}^1dx\, f(x,y)\chi_{1-y<x^2}+
\int_{1}^{2}dy\int_{-1}^1dx\, f(x,y)\chi_{x^2<2-y}$$
Notice how one could drop the condition $x^2<2-y$ in the first integral, and $1-y<x^2$ in the second, as these are now always true.
In the second integral, the condition is equivalent to $-\sqrt{2-y}<x<\sqrt{2-y}$, and these bounds are within $[-1,1]$, so things are easy: we can replace the indicator function with the new bounds. In the first integral the condition is equivalent to $x>\sqrt{1-y}$ or $x<-\sqrt{1-y}$. This splits the first integral into two parts:
$$I=\int_{0}^{1}dy\int_{-1}^{-\sqrt{1-y}}dx \,f(x,y)+\int_{0}^{1}dy\int_{\sqrt{1-y}}^1dx \,f(x,y)+\int_{1}^{2}dy\int_{-\sqrt{2-y}}^{\sqrt{2-y}}dx\, f(x,y).$$
This is the relation you give.
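To double-check the split, one can compare both orders of integration numerically for a concrete integrand; the sketch below is my own addition, with the arbitrary test function $f(x,y)=x^2+y$ (for which both sides equal $3$) and plain midpoint Riemann sums.

```python
# Numerical sanity check (my own addition, not part of the derivation):
# compare the original order of integration with the three split integrals,
# for the arbitrary test integrand f(x, y) = x**2 + y.
import math

def f(x, y):
    return x**2 + y

def midpoint(g, a, b, n=400):
    # Midpoint Riemann sum of g over [a, b] with n panels.
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# Original order: x in [-1, 1], y in [1 - x^2, 2 - x^2].
I_orig = midpoint(lambda x: midpoint(lambda y: f(x, y), 1 - x**2, 2 - x**2),
                  -1.0, 1.0)

# Switched order: the three pieces derived above.
I1 = midpoint(lambda y: midpoint(lambda x: f(x, y), -1.0, -math.sqrt(1 - y)),
              0.0, 1.0)
I2 = midpoint(lambda y: midpoint(lambda x: f(x, y), math.sqrt(1 - y), 1.0),
              0.0, 1.0)
I3 = midpoint(lambda y: midpoint(lambda x: f(x, y),
                                 -math.sqrt(2 - y), math.sqrt(2 - y)),
              1.0, 2.0)

print(abs(I_orig - (I1 + I2 + I3)) < 1e-2)  # True
```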
REPLY [1 votes]: Your diagram is not correct. The region is bounded by the curves $y = 1 - x^2$, $y = 2 - x^2$, $x = -1$, $x = 1$. The correct diagram is as given below -
The given solution has three split integrals - first integral corresponds to sub-region $1$, second to sub-region $2$ and third to sub-region $3$. | {"set_name": "stack_exchange", "score": 0, "question_id": 4379693} |
TITLE: Don't understand an induction proof for odometer principle
QUESTION [1 upvotes]: Prove by induction that the Odometer Principle with base b does indeed give the representation $$\text{$x_{n-1}...x_1x_0$ for the natural number $N = x_{n-1}b^{n-1}+...+x_1b+x_0$} $$.
So my question is, in the bracketed section of the image, how does one get from one line to the next?
REPLY [0 votes]: $...... + x_mb^m + (b-1)(b^{m-1} + b^{m-2} + ..... + 1) +1 =$
$...... + x_mb^m + [b(b^{m-1} + b^{m-2} + ..... + 1) - (b^{m-1} + b^{m-2} + ..... + 1)] + 1=$
$...... + x_mb^m + [(b^{m} + b^{m-1} + ..... + b) - (b^{m-1} + b^{m-2} + ..... + 1)] + 1=$
$...... + x_mb^m + [b^m + b^{m-1} - b^{m-1} + b^{m-2} - b^{m-2} + ... + b - b -1] + 1=$
$...... + x_mb^m + [b^m -1] + 1=$
$...... + x_mb^m + b^m=$
$...... + (x_m + 1)b^m $
=====
The text is assuming that you are familiar with $(b-1)(b^{m-1} + .... + 1) = b^m -1$ and that you can do $x_mb^m + (b-1)(b^{m-1} + .... + 1) +1 = x_mb^m + (b^m -1) + 1 = (x_m +1)b^m$ in your head.
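A quick numeric illustration (my own addition) of the two facts used above: the geometric-sum identity $(b-1)(b^{m-1}+\cdots+1)=b^m-1$, and the odometer carry step $x_mb^m + (b^m-1) + 1 = (x_m+1)b^m$. Names such as `geom_sum` are illustrative, not from the original text.

```python
# Check the geometric-sum identity and the carry step for a few bases/exponents.
def geom_sum(b, m):
    # b^{m-1} + b^{m-2} + ... + 1
    return sum(b**i for i in range(m))

for b in (2, 8, 10, 16):
    for m in (1, 2, 3, 5):
        assert (b - 1) * geom_sum(b, m) == b**m - 1

# The carry step: a digit x_m followed by m maximal digits (b-1), plus 1,
# rolls over to (x_m + 1) * b^m -- exactly what an odometer does.
b, m, x_m = 10, 3, 4
before = x_m * b**m + (b - 1) * geom_sum(b, m)  # the number 4999
assert before + 1 == (x_m + 1) * b**m           # rolls over to 5000
print(before, "->", before + 1)  # 4999 -> 5000
```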
\begin{document}
\title{DOA Estimation for Transmit Beamspace MIMO Radar via Tensor Decomposition with Vandermonde Factor Matrix}
\author{Feng Xu, \IEEEmembership{Student Member, IEEE}, Matthew W. Morency, \IEEEmembership{Student Member, IEEE}, and Sergiy A. Vorobyov, \IEEEmembership{Fellow, IEEE}
\thanks{This work was supported in part by the Academy of Finland under Grant 319822, in part by Huawei, and in part by the China Scholarship Council. This work was conducted while Feng Xu was a visiting doctoral student with the Department of Signal Processing and Acoustics, Aalto University. (\textit{Corresponding author: Sergiy A. Vorobyov.})}
\thanks{Feng Xu is with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China, and also with the Department of Signal Processing and Acoustics, Aalto University, Espoo 02150, Finland. (e-mail: [email protected], [email protected]).}
\thanks{Matthew W. Morency is with the Dept. Microelectronics, School of Electrical Engineering, Mathematics, and Computer Science, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands. (e-mail: [email protected]).}
\thanks{Sergiy A. Vorobyov is with the Department of Signal Processing and Acoustics, Aalto University, Espoo 02150, Finland. (e-mail: [email protected]).}
}
\maketitle
\begin{abstract}
We address the problem of tensor decomposition in application to direction-of-arrival (DOA) estimation for transmit beamspace (TB) multiple-input multiple-output (MIMO) radar. A general 4-order tensor model that enables computationally efficient DOA estimation is designed. Whereas other tensor decomposition-based methods treat all factor matrices as arbitrary, the essence of the proposed DOA estimation method is to fully exploit the Vandermonde structure of the factor matrices to take advantage of the shift-invariance between and within different subarrays. Specifically, the received signal of TB MIMO radar is expressed as a 4-order tensor. Depending on the target Doppler shifts, the constructed tensor is reshaped into two distinct 3-order tensors. A computationally efficient tensor decomposition method is proposed to decompose the Vandermonde factor matrices. The generators of the Vandermonde factor matrices are computed to estimate the phase rotations between subarrays, which can be utilized as a look-up table for finding target DOA. It is further shown that our proposed method can be used in a more general scenario where the subarray structures can be arbitrary but identical. The proposed DOA estimation method requires no prior information about the tensor rank and is guaranteed to achieve precise decomposition result. Simulation results illustrate the performance improvement of the proposed DOA estimation method as compared to conventional DOA estimation techniques for TB MIMO Radar.
\end{abstract}
\begin{IEEEkeywords}
DOA estimation, Shift-invariance, TB MIMO radar, Tensor decomposition, Vandermonde factor matrix
\end{IEEEkeywords}
\section{Introduction}\label{1}
\IEEEPARstart{T}{he} development of multiple-input multiple-output (MIMO) radar has been the focus of intensive research \cite{6,5,8,7,9} over the last decade, and has opened new opportunities in target detection and parameter estimation. Many works have been reported in the literature showing the applications of MIMO radar with widely separated antennas \cite{6} or collocated antennas \cite{5}. Among these applications, direction-of-arrival (DOA) estimation \cite{8,12,15,16,18,19,26,21,23} is one of the most fundamental research topics. In this paper, we mainly focus on the DOA estimation problem for MIMO radar with collocated antennas.
By ensuring that the transmitted waveforms are orthogonal \cite{11}, MIMO radar enables increasing the system's degree of freedom (DoF), improving the spatial resolution and enhancing the parameter identifiability. The essence behind these advantages is the construction of a virtual array (VA), which can be regarded as a new array with larger aperture and more elements \cite{7,9}. However, the omnidirectional transmit beampattern in MIMO radar, resulting from the orthogonal waveforms, deteriorates the parameter estimation performance since most of the emitted energy is wasted as compared to its phased-array counterpart. To tackle this problem, the transmit beamspace (TB) technique has been introduced \cite{8,12,13}. In TB MIMO radar, the transmitted energy can be focused on a fixed region \cite{8,12} by using a number of linear combinations of the transmitted waveforms via a TB matrix. This benefit becomes more evident when the number of elements in MIMO radar is large \cite{13}. Specifically, at some number of waveforms, the gain from using more waveforms begins to degrade the estimation performance. The trade-off between waveform diversity and spatial diversity implies that the performance of DOA estimation in TB MIMO radar can be further improved with a carefully designed TB matrix.
Meanwhile, many algorithms for DOA estimation in MIMO radar have been proposed. These algorithms can be summarized in two categories, signal covariance matrix-based algorithms \cite{15,16,18,8,12,19} and signal tensor decomposition-based algorithms \cite{21,26,23,27,29,35,40,49,52}. For example, the estimation of target spatial angles can be conducted by multiple signal classification (MUSIC). The generalization of MUSIC to a planar array requires a 2-dimension (2-D) spectrum searching \cite{15}, and thus suffers from high computational complexity. By exploiting the rotational invariance property (RIP) of the signal subspace, estimation of signal parameters via rotational invariance technique (ESPRIT) \cite{8,12,16} can be applied to estimate the target angles without a spectrum searching. The RIP can be enforced in many ways, e.g., uniformly spaced antennas \cite{16} and the design of TB matrix \cite{8,12}. To further reduce the computational complexity and increase the number of snapshots, unitary-ESPRIT (U-ESPRIT) has been proposed \cite{19}. Some algorithms like propagator method (PM) have been studied \cite{18} to avoid the singular value decomposition (SVD) of the signal covariance matrix. The aforementioned DOA estimation algorithms are mostly conducted on a per-pulse basis to update the result from pulse to pulse. They ignore the multi-linear structure of the received signal in MIMO radar and, therefore, lead to poor performance in low signal-to-noise ratio (SNR) region.
The second category, signal tensor decomposition-based algorithms, has been proposed to address the problem of poor performance in low SNR. In particular, a 3-order tensor is introduced to store the whole received signal for MIMO radar in a single coherent processing interval (CPI). Methods like high-order SVD (HOSVD) \cite{23,33} and parallel factor (PARAFAC) analysis \cite{21,26} can be applied to decompose the factor matrices. The DOA estimation can be conducted by exploiting the factor matrix with the target angular information. For example, the widely used alternating least square (ALS) algorithm is a common way of computing the approximate low-rank factors of a tensor. These factor matrices can be used to locate multiple targets simultaneously \cite{21,27}. Although the application of the conventional ALS algorithm improves the DOA estimation performance for MIMO radar, it usually requires the tensor rank as prior information, and the computational complexity can be extremely high as the convergence is unstable.
Nevertheless, conventional tensor decomposition methods are developed for tensors with arbitrary factor matrices. In array signal processing, special matrix structures like Toeplitz, Hankel, Vandermonde and columnwise orthonormal \cite{28,29} may exist in the factor matrices when a tensor model is applied to collect the received signal. The Vandermonde structure, as the most common one, can be generated from the application of carrier frequency offset, e.g., frequency diversity array (FDA) \cite{30} and orthogonal frequency-division multiplexing (OFDM) waveform \cite{55}, or uniformly spaced antennas, e.g., uniform linear array (ULA) and uniform rectangular array (URA). The decomposition of a tensor with structured factor matrices thus deserves further study, as the structure may point to a novel decomposition method and better uniqueness conditions. This is called {\it constrained tensor decomposition} \cite{28,29}. Moreover, transmit array interpolation is introduced for MIMO radar with arbitrary array structure \cite{23}. By solving the minimax optimization problem regarding interpolation matrix design, the original transmit array is mapped to a virtual array with the desired structure. The DOA estimation bias caused by interpolation errors has also been analyzed in \cite{23}. However, the interpolation technique deteriorates the parameter identifiability, which makes it inappropriate for TB MIMO radar with arbitrary but identical subarrays.
In this paper, we consider the problem of tensor decomposition in application to DOA estimation for TB MIMO radar with multiple transmit subarrays.\footnote{Some preliminary ideas that have been extended and developed to this paper we published in \cite{31,53}.} A general 4-order tensor model that enables computationally efficient DOA estimation is designed. Whereas other tensor decomposition-based methods treat all factor matrices as arbitrary, the proposed DOA estimation method fully exploits the Vandermonde structure of the factor matrix to take advantage of the shift-invariance between and within different subarrays. In particular, the received signal of TB MIMO radar is expressed as a 4-order tensor. Depending on the target Doppler shifts, the constructed tensor is reshaped into two distinct 3-order tensors. A computationally efficient tensor decomposition method, which can be conducted via linear algebra with no iterations, is proposed to decompose the factor matrices of the reshaped tensors. Then, the Vandermonde structure of the factor matrices is utilized to estimate the phase rotations between transmit subarrays, which can be applied as a look-up table for finding target DOA. It is further shown that our proposed method can be used in a more general scenario where the subarray configurations are arbitrary but identical. By exploiting the shift-invariance, the proposed method improves the DOA estimation performance over conventional methods, and it has no requirement of prior information about the tensor rank. Simulation results verify that the proposed DOA estimation method has better accuracy and higher resolution.
The rest of this paper is organized as follows. Some algebra preliminaries about tensors and matrices are introduced at the end of Section~\ref{1}. A 4-order tensor model for TB MIMO radar with uniformly spaced subarrays is designed in Section~\ref{sig}. In Section~\ref{3}, the proposed tensor model is reshaped properly to achieve the uniqueness condition of tensor decomposition. The DOA estimation is conducted by exploiting the shift-invariance between and within different subarrays. The parameter identifiability is also analysed. Section~\ref{4} generalizes the proposed DOA estimation method to TB MIMO radar with non-uniformly spaced subarrays, where multiple scales of shift-invariances can be found. Section~\ref{5} performs the simulation examples while the conclusions are drawn in Section~\ref{6}.
\textsl{Notation}: Scalars, vectors, matrices and tensors are denoted by lower-case, boldface lower-case, boldface uppercase, and calligraphic letters, e.g., $y$, $\bf y$, $\bf Y$, and $\cal Y$, respectively. The transposition, Hermitian transposition, inversion, pseudo-inversion, Hadamard product, outer product, Kronecker product and Khatri-Rao (KR) product operations are denoted by ${\left( \cdot \right)^T},{\left( \cdot \right)^H},{\left( \cdot \right)^{ - 1}}, {\left( \cdot \right)^{\dag}}, * , \circ ,\otimes$, and $\odot$, respectively, while $vec\left( \cdot \right)$ stands for the operator which stacks the elements of a matrix/tensor one by one to a column vector. The notation $diag({\bf{y}})$ represents a diagonal matrix with its elements being the elements of ${\bf{y}}$, while $\left\| {\bf{Y}} \right\|_F$ and $\left\| {\bf{Y}} \right\|$ are the Frobenius norm and Euclidean norm of ${\bf{Y}}$, respectively. Moreover, ${{\bf{1}}_{M\times N}}$ and ${{\bf{0}}_{M\times N}}$ denote an all-one matrix of dimension $M \times N$ and an all-zero matrix of size $M \times N$, respectively, and ${{\bf{I}}_{M}}$ stands for the identity matrix of size $M \times M$. For ${\bf B} \in {{\mathbb{C}}^{M \times N}}$, the $n$-th column vector and $(m,n)$-th element are denoted by ${\bf b}_n$ and $B_{mn}$, respectively, while the $m$-th element of ${\bf b} \in {{\mathbb{C}^{M \times 1}}}$ is given by $b(m)$. The estimates of $\bf B$ and $\bf b$ are given by $\bf \hat B$ and $\bf \hat b$, while the rank and Kruskal-rank of ${\bf B}$ are denoted by $r({\bf B})$ and $k_{\bf B}$, respectively. The submatrices of $\bf B$ obtained by deleting its first row and by deleting its last row are denoted by ${\bf \underline B}$ and ${\bf \overline B}$, respectively.
If $\bf B$ can be written as ${\bf B} \triangleq [{\bm \beta}_1,{\bm \beta}_2,\cdots,{\bm \beta}_N]$, where ${\bm \beta}_n \triangleq [1,z_n,z_n^2,\cdots,z_n^{M-1}]^T$, then $\bf B$ is a Vandermonde matrix, and ${\bf z} \triangleq [z_1,z_2, \cdots, z_N]^T \in {{\mathbb{C}}^{N \times 1}} $ is its vector of generators. When each element of ${\bf z}$ is unique, the generators are said to be distinct.
\subsection{Algebra Preliminaries for Tensors and Matrices}\label{2}
For an $N$-th order tensor ${{\cal Y} \in {{\mathbb{C}}^{{I_1} \times {I_2}\times \cdots \times {I_N}}}}$, the following facts are introduced \cite{27,34}.
\begin{fact}(PARAFAC decomposition):
The PARAFAC decomposition of an $N$-th order tensor is a linear combination of the minimum number of rank-one tensors, given by
\begin{equation}
\begin{aligned}
{\cal Y} = \sum\limits_{l = 1}^L {{{\bm{\alpha}}_l^{(1)}} \circ {{\bm{\alpha}}_l^{(2)}} \circ \cdots \circ {{\bm{\alpha}}_l^{(N)}}}\triangleq [[{\bf A}^{(1)}, {\bf A}^{(2)},\cdots, {\bf A}^{(N)}]]
\end{aligned}
\end{equation}\label{PARAFAC}
where ${{\bm{\alpha}}_l^{(n)}}$ is the $l$-th column of the $n$-th factor matrix ${\bf{A}}^{(n)}$ of size $I_n \times L$, and $L$ is the tensor rank.
\end{fact}
\begin{fact} (Uniqueness of PARAFAC decomposition):\label{krank}
The PARAFAC decomposition is unique if all potential factor matrices satisfying \eqref{PARAFAC} also satisfy
\begin{equation}
{{\bf{\tilde A}}^{(n)}} = {{\bf{A}}^{(n)}}{{\bf{\Pi }}^{(n)}}{{\bf{\Delta }}^{(n)}}
\end{equation}
where ${{\bf{\Pi }}^{(n)}}$ is a permutation matrix and ${{\bf{\Delta }}^{(n)}}$ is a diagonal matrix. The product of ${{\bf{\Delta }}^{(n)}},n = 1,2,\cdots,N$ is an $L \times L$ identity matrix. Usually, the generic uniqueness condition is given by \cite{34}: $\sum\limits_{n = 1}^N {{k_{{{\bf{A}}^{(n)}}}}} \ge 2L + (N - 1)$.
\end{fact}
\begin{fact}(Mode-$n$ unfolding of tensor):\label{tensordef}
The mode-$n$ unfolding of a tensor ${{\cal Y} \in {{\mathbb{C}}^{{I_1} \times {I_2}\times \cdots \times {I_N}}}}$ is denoted by ${\bf Y}_{(n)}$, which is a matrix of size ${{I_1}\cdots{I_{n-1}}{I_{n+1}}\cdots{I_N}}\times {I_n}$
\begin{equation}
{\bf Y}_{(n)} = \left( {{{\bf{A}}^{(1)}} \odot \cdots \odot {{\bf{A}}^{(n - 1)}} \odot {{\bf{A}}^{(n + 1)}} \odot \cdots \odot{{\bf{A}}^{(N)}}} \right)\left({{{{\bf{A}}^{(n)}}}}\right)^T.
\end{equation}
\end{fact}
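As a sanity check (our own illustration, not from the paper), the mode-$3$ unfolding identity ${\bf Y}_{(3)} = ({\bf A}^{(1)} \odot {\bf A}^{(2)})({\bf A}^{(3)})^T$ can be verified numerically for a random rank-$L$ tensor; the dimensions below are arbitrary.

```python
# Numerical check (illustration only) of the mode-3 unfolding
# Y_(3) = (A ⊙ B) C^T for a random 3-order tensor Y = [[A, B, C]].
import numpy as np

rng = np.random.default_rng(0)
I, J, K, L = 4, 5, 3, 2  # arbitrary dimensions and tensor rank

A = rng.standard_normal((I, L))
B = rng.standard_normal((J, L))
C = rng.standard_normal((K, L))

def khatri_rao(X, Y):
    """Columnwise Kronecker (Khatri-Rao) product."""
    return np.einsum('il,jl->ijl', X, Y).reshape(X.shape[0] * Y.shape[0], -1)

# Build the tensor from its PARAFAC factors: y_{ijk} = sum_l a_il b_jl c_kl.
T = np.einsum('il,jl,kl->ijk', A, B, C)

# Mode-3 unfolding: collapse the first two modes (row index i*J + j).
Y3 = T.reshape(I * J, K)
assert np.allclose(Y3, khatri_rao(A, B) @ C.T)
```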
\begin{fact}\label{reshape} (Tensor reshape):
The reshape operator for an $N$-th order tensor ${{\cal Y} \in {{\mathbb{C}}^{{I_1} \times {I_2}\times \cdots \times {I_N}}}}$ returns a new $M$-th order tensor ${{\cal X} \in {{\mathbb{C}}^{{J_1} \times {J_2}\times \cdots \times {J_M}}}}$ ($M \le N$) with $\prod\limits_{n = 1}^N {{I_n}} = \prod\limits_{m = 1}^M {{J_m}}$ and $vec({\cal Y}) = vec({\cal X})$, e.g., if ${J_m} = {I_m}, m = 1,2,\cdots,M-1$ and ${J_M} = \prod\limits_{n = M}^N {{I_n}}$, the mode-$M$ unfolding of reshaped $\cal X$ is
\begin{equation}
{{\bf{X}}_{(M)}} = \left( {{{\bf{A}}^{(1)}} \odot \cdots \odot {{\bf{A}}^{(M-1)}}} \right){\left( {{{\bf{A}}^{(M)}} \odot \cdots \odot {{\bf{A}}^{(N)}}} \right)^T}.
\end{equation}
\end{fact}
\begin{lemma}:\label{111}
Consider a 3-order tensor ${\cal Y} \triangleq [[{\bf A}^{(1)},{\bf A}^{(2)},{\bf A}^{(3)}]]$, where ${\bf A}^{(1)}$ is a Vandermonde matrix or the KR product of two Vandermonde matrices. The decomposition of $\cal Y$ is generically unique if the generators of ${\bf A}^{(1)}$ are distinct and ${\bf A}^{(3)}$ has full column rank.
The proof is purely technical and is given in the supplementary material as Appendix~\ref{222}.
\end{proof}
\end{lemma}
\begin{lemma}: The following equalities hold true
\begin{equation}
\begin{aligned}
&{\bf A}{\bf B} = {\bf A} \odot {\bf b}^T = {\bf b}^T \odot {\bf A}\\
& {\bf A} \odot {\bf b}^T\odot {\bf C} = {\bf b}^T \odot {\bf A}\odot {\bf C} = {\bf A} \odot {\bf C}\odot {\bf b}^T\\
& ({\bf A} \odot {\bf B})\odot {\bf C} = {\bf A} \odot ({\bf B}\odot {\bf C})\\
& vec\left( {{\bf{ABD}}} \right) = \left( {{{\bf{D}}^T} \odot {\bf{A}}} \right){\bf{b}}\\
& \left( {{\bf{A}} \otimes {\bf{C}}} \right)\left( {{\bf{D}} \otimes {\bf{E}}} \right) = \left( {{\bf{AD}}} \right) \otimes \left( {{\bf{CE}}} \right)
\end{aligned}
\end{equation}
where ${\bf{A}} \in {\mathbb{C}^{M \times N}}$, ${\bf{C}} \in {\mathbb{C}^{Q \times N}}$, ${\bf{D}} \in {\mathbb{C}^{N \times P}}$, ${\bf{E}} \in {\mathbb{C}^{N \times L}}$ and ${\bf{B}} = diag({\bf{b}}) \in {\mathbb{C}^{N \times N}}$.
\end{lemma}
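The vectorization identity $vec({\bf A}{\bf B}{\bf D}) = ({\bf D}^T \odot {\bf A}){\bf b}$ can likewise be checked numerically; the sketch below is our own (sizes are arbitrary), and it takes $vec(\cdot)$ to stack columns, i.e., Fortran order.

```python
# Verify vec(A diag(b) D) = (D^T ⊙ A) b, with vec(.) stacking columns
# (column-major / Fortran order). Sizes are arbitrary for illustration.
import numpy as np

rng = np.random.default_rng(1)
M, N, P = 3, 4, 5

A = rng.standard_normal((M, N))
b = rng.standard_normal(N)
D = rng.standard_normal((N, P))

def khatri_rao(X, Y):
    """Columnwise Kronecker (Khatri-Rao) product."""
    return np.einsum('pn,mn->pmn', X, Y).reshape(X.shape[0] * Y.shape[0], -1)

lhs = (A @ np.diag(b) @ D).flatten(order='F')  # vec(ABD), column stacking
rhs = khatri_rao(D.T, A) @ b
assert np.allclose(lhs, rhs)
```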
\section{TB MIMO Radar Tensor model}\label{sig}
\subsection{TB MIMO Radar with Linear Array}
Consider a collocated MIMO radar with $M$ transmit elements and $N$ receive elements. The transmit array is a ULA with its elements spaced at half the working wavelength away from each other. The receive elements are randomly placed within a fixed aperture. Assuming $S$ subarrays are uniformly spaced at the transmit side and also assuming that each subarray contains $M_0$ elements, the indices of the first elements of these subarrays are denoted by $m_s,s = 1,2,\cdots,S$. Without loss of generality, $m_s$ increases uniformly with $s$. The steering vectors of the entire transmit array and the first transmit subarray at direction $\theta$ can be given by ${\bm{\alpha }}(\theta ) \triangleq {\left[ {1,{e^{ - j\pi\sin \theta }},\cdots,{e^{ - j(M-1)\pi\sin \theta }}} \right]^T}$ and ${{\bm{\alpha }}_0}(\theta) \triangleq {\left[ {1,{e^{ - j\pi\sin \theta }},\cdots,{e^{ - j(M_0-1)\pi\sin \theta }}} \right]^T}$, respectively. The steering vector of the receive array can be written as ${\bm{\beta }}(\theta ) \triangleq {\left[ 1,{{e^{ - j\frac{{2\pi }}{\lambda }{x_2}\sin \theta}},\cdots,{e^{ - j\frac{{2\pi }}{\lambda }{x_{N}}\sin \theta }}} \right]^T}$, where the receive element positions satisfy $\left\{ {\left. x_{n} \right|{\rm{0}} < {x_{n}} \le {D}, n = 1,\cdots,N} \right\}$ and $D$ is the aperture of the receive array.
Accordingly, the transmit and receive steering matrices for $L$ targets in $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$ can be denoted by ${\bf{A}} \triangleq \left[ {{\bm{\alpha }}({\theta _1}),{\bm{\alpha }}({\theta _2}),\cdots,{\bm{\alpha }}({\theta _L})} \right]$ and ${\bf{B}} \triangleq \left[ {{\bm{\beta }}({\theta _1}),{\bm{\beta }}({\theta _2}),\cdots,{\bm{\beta }}({\theta _L})} \right]$, respectively, while the steering matrix for the first transmit subarray can be given as ${\bf{A}}_0 \triangleq \left[ {{\bm{\alpha }}_0({\theta _1}),{\bm{\alpha }}_0({\theta _2}),\cdots,{\bm{\alpha }}_0({\theta _L})} \right]$. Note that ${\bf{A}}_0$ can also be regarded as the submatrix of ${\bf{A}}$ consisting of its first $M_0$ rows.
In conventional MIMO radar, the received signal at the output of the receive array after matched-filtering in matrix form can be modelled as \cite{21}: ${\bf{Y}} = {\bf{B\Sigma}}{\bf A}^T + {\bf N}$, where ${\bf {\Sigma}} = diag({\bm \sigma})$, ${\bm{\sigma}} \triangleq \left[ {\sigma _1^2,\sigma _2^2,\cdots,\sigma _L^2} \right]^T$ represents the vector of target radar cross section (RCS) fading coefficients obeying the Swerling I model, and ${\bf N}$ is the noise residue of size $N\times M$. When the TB technique is introduced \cite{8,12}, the received signal model after matched-filtering of $K$ orthogonal waveforms ($K \le M$) can be generalized as ${\bf{Y}} = {\bf{B\Sigma}}({{\bf W}^ H\bf A})^T + {\bf N}$, where ${\bf{W}} \triangleq {\left[ {{{\bf{w}}_1},{{\bf{w}}_2},\cdots,{{\bf{w}}_K}} \right]_{M \times K}}$ denotes the TB matrix.
Hence, the received signal for the $s$-th transmit subarray, $s = 1,2,\cdots, S$, and the whole receive array can be written as
\begin{equation}
{\bf{Y}}_s = {\bf{B\Sigma}}({{\bf W}_s^ H {\bf A}_s})^T + {\bf N}_s \label{matrix}
\end{equation}
where ${\bf W}_s$ and ${\bf A}_s$ represent the TB matrix and steering matrix for the $s$-th transmit subarray, respectively, and ${\bf N}_s$ is the noise residue of size $N\times {M_0}$. Assume that the TB matrix for each subarray is identical, denoted by ${\bf W}_0 \in {\mathbb{C}^{{M_0} \times K}}$. Note that since $m_s$ increases uniformly, the steering matrix for the $s$-th transmit subarray can also be expressed as ${\bf A}_s = {\bf A}_0{\bf \Gamma}_s$, where ${\bf \Gamma}_s = diag({{\bf{k}}_s})$ and $ {{\bf{k}}_s} \triangleq \left[ {{e^{ - j\pi ({m_s} - 1)\sin {\theta _1}}},\cdots,{e^{ - j\pi ({m_s} - 1)\sin {\theta _L}}}} \right]^T$. Substituting this relationship into \eqref{matrix} and vectorizing it, we have ${{\bf{y}}_{s}^{}} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf \Gamma}_s{{\bm \sigma}} + {{\bf{n}}_{s}^{}}$, where ${\bf n}_s$ is the vectorized noise residue.
Considering the Doppler effect, the received signal during the $q$-th pulse in a single CPI, $q = 1,2,\cdots,Q$, can be written as
\begin{equation}
{{\bf{y}}_{s}^{(q)}} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf \Gamma}_s{{\bf c}_q} + {{\bf{n}}_{s}^{(q)}}\label{vector_subarray}
\end{equation}
where ${{\bf{c}}_{q}} = {\bm{\sigma}} * {{\bf{\bar c}}_{{q}}}$, ${{\bf{\bar c}}_{{q}}} \triangleq \left[ {{e^{j2\pi {f_1}{q}T}},{e^{j2\pi {f_2}{q}T}},\cdots,{e^{j2\pi {f_L}{q}T}}} \right]^T$, $f_l$ denotes the Doppler shift, $T$ is the radar pulse duration, and ${{\bf{n}}_{s}^{(q)}}$ is the vectorized noise residue. Concatenating the received signal of the $S$ subarrays in the $q$-th pulse, i.e., ${{\bf{Y}}^{(q)}} \triangleq \left[ {{{\bf{y}}_{1}^{(q)}},{{\bf{y}}_{2}^{(q)}},\cdots,{{\bf{y}}_{S}^{(q)}}} \right]_{KN \times S}$, the compact form can be written as
\begin{equation}
{{\bf{Y}}^{(q)}} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\left[ {{{\bf{c}}_q^T} \odot {\bf{K}}} \right]^T} + {{\bf{N}}^{(q)}}\label{Subarray}
\end{equation}
where ${\bf{K}} \triangleq {\left[ {{\bf{k}}_1,{\bf{k}}_2,\cdots,{\bf{k}}_S} \right]^T}_{S \times L}$ and ${{\bf{N}}^{(q)}} \triangleq \left[ {{{\bf{n}}_{1}^{(q)}},{{\bf{n}}_{2}^{(q)}},\cdots,{{\bf{n}}_{S}^{(q)}}} \right]$. Note that the $l$-th column of $\bf K$ represents the phase rotations of the $l$-th target across the $S$ subarrays. Hence, $\bf K$ can be termed the {\it transmit subarray steering matrix}.
Vectorizing \eqref{Subarray}, the $KNS \times 1$ vector can be given as
\begin{equation}
\begin{aligned}
{{\bf{z}}_q} & = \left\{ {\left[ {{{\bf{c}}_q^T} \odot {\bf{K}}} \right] \odot \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]} \right\}{{\bf{1}}_{L \times 1}}+{\bf r}_q\\
& = \left[ {{\bf{K}} \odot \left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf{c}}_q + {{\bf{r}}_q}
\end{aligned}\label{ULAz}
\end{equation}
where ${{\bf{r}}_q}$ is the vectorized noise residue of ${{\bf{N}}^{(q)}}$. Then, concatenate the received signal of $Q$ pulses, i.e., ${{\bf{Z}}} \triangleq \left[ {{{\bf{z}}_{1}},{{\bf{z}}_{2}},\cdots,{{\bf{z}}_{Q}}} \right]_{KNS \times Q}$. The compact form can be formulated as
\begin{equation}
{{\bf{Z}}} = \left[ {{\bf{K}} \odot \left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{{\bf{C}}^T} + {{\bf{R}}}\label{unfolding}
\end{equation}
where ${{\bf{C}}} \triangleq {\left[ {{\bf{c}}_1,{\bf{c}}_2,\cdots,{\bf{c}}_{Q}} \right]^T}_{Q \times L}$ and ${{\bf{R}}} \triangleq \left[ {{{\bf{r}}_{1}},{{\bf{r}}_{2}},\cdots,{{\bf{r}}_{Q}}} \right]$. Similarly, $\bf C$ can be termed the {\it Doppler steering matrix} since each column denotes the Doppler steering vector for one target (with additional RCS information). According to \textbf{Fact~\ref{tensordef}}, a 4-order tensor ${\cal Z} \in {\mathbb{C}^{{S} \times K \times N\times{Q}}}$ whose matricized version is ${{\bf{Z}}}$ in \eqref{unfolding} can be constructed. Denoting ${\bf X} \triangleq {{\bf{W}}_0^H{{\bf{A}}_0}}$, this tensor can be written as
\begin{equation}
{\cal Z} = \sum\limits_{l = 1}^L {{{\bm{\kappa }}_l} \circ {{\bm{\chi}}_l} \circ {{\bm{\beta }}_l} \circ {{\bm{\gamma }}_l}}+{\cal R} \triangleq [[ {{\bf{K}},{\bf{X}},{\bf{B}},{\bf{C}}}
]] +{\cal R}\label{tensor1}
\end{equation}
where ${{{\bm{\kappa }}_l}, {{\bm{\chi}}_l}, {{\bm{\beta }}_l}, {{\bm{\gamma }}_l}}$ are the $l$-th columns of ${\bf{K}},{\bf{X}},{\bf{B}},{\bf{C}}$, respectively, $L$ is the tensor rank, and $\cal R$ is the noise tensor of the same size.
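The correspondence between the CP model \eqref{tensor1} and the matricization \eqref{unfolding} can be checked numerically. In the following sketch (our illustration; random complex matrices stand in for the steering matrices, and the subarray index $s$ is assumed to vary slowest in the row ordering) the tensor built from the four factor matrices unfolds exactly to $[{\bf K} \odot {\bf X} \odot {\bf B}]{\bf C}^T$.

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(A, C):
    M, N = A.shape
    Q, _ = C.shape
    return np.einsum('in,jn->ijn', A, C).reshape(M * Q, N)

def cplx(shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

S, Kw, N, Q, L = 3, 2, 4, 5, 2        # subarrays, waveforms, rx elements, pulses, targets
Km = cplx((S, L))                     # transmit subarray steering matrix K
X  = cplx((Kw, L))                    # X = W0^H A0
Bm = cplx((N, L))                     # receive steering matrix B
Cm = cplx((Q, L))                     # Doppler steering matrix C

# noise-free 4-order tensor: Z[s,k,n,q] = sum_l K[s,l] X[k,l] B[n,l] C[q,l]
Z = np.einsum('sl,kl,nl,ql->sknq', Km, X, Bm, Cm)

# its matricization equals [K kr X kr B] C^T, with s slowest and n fastest
Z_mat = khatri_rao(khatri_rao(Km, X), Bm) @ Cm.T
assert np.allclose(Z.reshape(S * Kw * N, Q), Z_mat)
```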
\subsection{TB MIMO Radar with Planar Array}
\begin{figure}
\centerline{\includegraphics[width=0.8 \columnwidth]{SystemModel.pdf}}
\caption{Transmit array configuration for TB MIMO radar with planar array.}\label{sys}
\end{figure}
Consider the planar array case, as shown in Fig.~\ref{sys}. A URA with $M = {M_x}\cdot{M_y}$ elements spaced at half the working wavelength in both directions and a planar array with $N$ randomly spaced elements are applied as the transmit and receive array, respectively. The transmit steering vector can be given as ${\bm{\alpha }}(\theta ,\varphi ) = {\bf{u}}(\theta ,\varphi ) \otimes {\bf{v}}(\theta ,\varphi )$, where ${\bf{u}}(\theta ,\varphi ) \triangleq {\left[ {1,{e^{ - j\pi u }},\cdots,{e^{ - j({M_y} - 1)\pi u}}} \right]^T}$, ${\bf{v}}(\theta ,\varphi ) \triangleq {\left[ {1,{e^{ - j\pi v}},\cdots,{e^{ - j({M_x} - 1)\pi v }}} \right]^T}$, $u \triangleq \sin \varphi \sin \theta$, $v \triangleq \sin \varphi \cos \theta $, and $(\theta ,\varphi )$ is the pair of azimuth and elevation of a target. The steering vector of the receive array can be written as ${\bm{\beta }}(\theta, \varphi ) \triangleq {\left[ {1,{e^{ - j\frac{{2\pi }}{\lambda }({x_{2}}v + {y_{2}}u)}}, \cdots,{e^{ - j\frac{{2\pi }}{\lambda }({x_{N}}v + {y_{N}}u)}}} \right]^T}$, where $\left\{ {\left. {({x_{n }},{y_{n }})} \right|{\rm{0}} < {x_{n}} \le {D_x},{\rm{0}} < {y_{n}} \le {D_y}} \right\}$ are the coordinates of the receive elements, and $D_x, D_y$ denote the apertures in two directions, respectively.
Accordingly, assume $S = I\cdot J$ transmit subarrays are uniformly spaced at the transmit side, which may or may not overlap. Each of them contains $M_0 = {M_{x_0}}\cdot {M_{y_0}}$ elements. The first subarray is selected as the reference subarray. For $L$ targets in $\left\{\left( {{\theta _{l}}}, {\varphi_l} \right) \right\}_{l = 1}^L$, the transmit and receive steering matrices can be generalized as ${\bf{A}} \triangleq \left[ {{\bm{\alpha }}({\theta _1},{\varphi_1}),{\bm{\alpha }}({\theta _2},{\varphi_2}),\cdots,{\bm{\alpha }}({\theta _L},{\varphi_L})} \right]$ and ${\bf{B}} \triangleq \left[ {{\bm{\beta }}({\theta _1},{\varphi_1}),{\bm{\beta }}({\theta _2},{\varphi_2}),\cdots,{\bm{\beta }}({\theta _L},{\varphi_L})} \right]$, respectively. Note that the transmit array is a URA; thus, we have ${\bf{A}} = {{\bf{U}}} \odot {{\bf{V}}}$, where ${{\bf{U}}} \triangleq \left[ {{\bf{u}}({\theta _1},{\varphi _1}),{\bf{u}}({\theta _2},{\varphi _2}),\cdots,{\bf{u}}({\theta _L},{\varphi _L})} \right]$ and ${{\bf{V}}} \triangleq \left[ {{\bf{v}}({\theta _1},{\varphi _1}),{\bf{v}}({\theta _2},{\varphi _2}),\cdots,{\bf{v}}({\theta _L},{\varphi _L})} \right]$. Similarly, the steering vector of the reference transmit subarray can be written as ${\bm{\alpha }}_0(\theta ,\varphi ) = {\bf{u}}_0(\theta ,\varphi ) \otimes {\bf{v}}_0(\theta ,\varphi )$, where ${\bf{u}}_0(\theta ,\varphi )$ and ${\bf{v}}_0(\theta ,\varphi )$ contain the first $M_{y_0}$ and $M_{x_0}$ elements in ${\bf{u}}(\theta ,\varphi )$ and ${\bf{v}}(\theta ,\varphi )$, respectively. The steering matrix for the reference transmit subarray can be denoted by ${\bf{A}}_0 = {{\bf{U}}_{0}} \odot {{\bf{V}}_{0}}$, where ${{\bf{U}}_{0}}$ and ${{\bf{V}}_{0}}$ are the submatrices of ${{\bf{U}}}$ and ${{\bf{V}}}$ that consist of the first $M_{y_0}$ and $M_{x_0}$ rows, respectively.
For the $(i,j)$-th subarray (or equivalently, for the $s$-th transmit subarray where $s = (j-1)I+i$), the index of the first element is denoted by $(m_{i},m_{j}),\ i = 1,2,\cdots,I,\ {j} = 1,2,\cdots, J$. Both $m_{i}$ and $m_{j}$ increase uniformly. The steering matrix for the $(i,j)$-th subarray can be given as ${\bf{A}}_{ij} = {{\bf{U}}_{j}} \odot {{\bf{V}}_{i}}$, where ${{\bf{U}}_{j}} = {{\bf{U}}_{0}}{{\bf \Gamma}_{j}}$, ${{\bf{V}}_{i}} = {{\bf{V}}_{0}}{{\bf \Gamma}_{i}}$, ${{\bf \Gamma}_{j}} = diag({\bf h}_{j}),{{\bf \Gamma}_{i}} = diag({\bf d}_{i})$, vectors ${\bf h}_{j} \triangleq {\left[{{e^{ - j\pi{(m_{j}-1)}u_1 }},\cdots,{e^{ - j\pi{(m_{j}-1)}u_L }}} \right]^T}$ and ${\bf d}_{i} \triangleq {\left[ {{e^{ - j\pi{(m_{i}-1)}v_1 }},\cdots,{e^{ - j\pi{(m_{i}-1)}v_L }}} \right]^T}$ indicate the phase rotations for $L$ targets in two directions, respectively.
Generalizing \eqref{vector_subarray}, the received signal after matched-filtering for the $s$-th transmit subarray and the whole receive array in the $q$-th pulse can be written as
\begin{equation}
{{\bf{y}}^{(q)}_{s}} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{{\bf \Gamma}_{i}}{{\bf \Gamma}_{j}}{{\bf c}_q} + {{\bf{n}}^{(q)}_{s}}.\label{z}
\end{equation}
Similarly, the concatenation of the received signal ${{\bf{y}}^{(q)}_{s}}$ for all $S$ subarrays in the $q$-th pulse can be expressed as
\begin{equation}
{\bf Y}^{(q)} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\left( {{{\bf{c}}_q^T} \odot {\bf{H }} \odot {\bf{\Delta}}} \right)^T} + {\bf N}^{(q)}\label{2Dunfolding}
\end{equation}
where ${\bf{H}} \triangleq {\left[ {{\bf{h}}_1,{\bf{h}}_2,\cdots,{\bf{h}}_{J}} \right]^T}_{{J} \times L}$ and ${\bf{\Delta}}\triangleq {\left[ {{\bf{d}}_1,{\bf{d}}_2,\cdots,{\bf{d}}_{I}} \right]^T}_{{I} \times L}$. The proof of \eqref{2Dunfolding} is purely technical and is given in the supplementary material as Appendix~\ref{A}. Then ${{\bf{z}}_q} = vec({\bf Y}^{(q)})$ can be formulated as
\begin{equation}
{{\bf{z}}_q} = \left[ {{\bf{H}} \odot {\bf \Delta} \odot \left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf{c}}_q + {{\bf{r}}_q}. \label{yq}
\end{equation}
After concatenating ${{\bf{z}}_q}$ in the same way as \eqref{unfolding}, the received signal of $Q$ pulses in the URA case can be written as
\begin{equation}
{{\bf{Z}}} = \left[ {{\bf{H}}\odot {\bf{\Delta}} \odot \left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{{\bf{C}}^T} + {{\bf{R}}}.\label{unfolding2}
\end{equation}
Interestingly, \eqref{unfolding2} can be obtained directly from \eqref{unfolding} by replacing $\bf K$ with ${\bf H} \odot {\bf \Delta}$. Hence, ${\bf H} \odot {\bf \Delta}$ can be regarded as the {\it transmit subarray steering matrix} for the URA. Using \textbf{Fact~\ref{tensordef}}, a 5-order tensor $\cal Z$ whose matricized version is ${{\bf{Z}}}$ in \eqref{unfolding2} can be constructed as
\begin{equation}
{\cal Z} = \sum\limits_{l = 1}^L {{{\bm{\eta}}_l} \circ{{\bm{\delta }}_l} \circ {{\bm{\chi}}_l} \circ {{\bm{\beta }}_l} \circ {{\bm{\gamma }}_l}}+{\cal R}\triangleq [[{{\bf{H}},{\bf{\Delta}},{\bf{X}},{\bf{B}},{\bf{C}}}
]] +{\cal R} \label{tensor2}
\end{equation}
where ${{\bm{\eta}}_l}$ and ${{\bm{\delta}}_l}$ are the $l$-th columns of ${\bf{H}}$ and ${\bf{\Delta}}$, respectively.
Note that since all subarrays are uniformly spaced, ${\bf{K}}$, ${\bf{\Delta}}$ and ${\bf{H}}$ are Vandermonde matrices and their vectors of generators can be respectively denoted by
\begin{equation}
\begin{aligned}
&{{\bm{\omega}}} \triangleq {\left[ {{e^{ - j\pi {\Delta_m}\sin {\theta _1}}},\cdots,{e^{ - j\pi {\Delta_m}\sin {\theta _L}}}} \right]^T}\\
&{{\bm{\omega}}_x} \triangleq {\left[ {{e^{ - j\pi {\Delta_{m_x}}v_1}},\cdots,{e^{ - j\pi {\Delta_{m_x}}v_L}}} \right]^T}\\
&{{\bm{\omega}}_y} \triangleq {\left[ {{e^{ - j\pi {\Delta_{m_y}}u_1}},\cdots,{e^{ - j\pi {\Delta_{m_y}}u_L}}} \right]^T}
\end{aligned}\label{generate}
\end{equation}
where the step sizes ${\Delta}_m = m_{s+1}-m_s$, ${\Delta_{m_x}} = m_{{i}+1}-m_{i}$, and ${\Delta_{m_y}} = m_{{j}+1}-m_{j}$. We assume the elements of ${{\bm{\omega}}}$, ${{\bm{\omega}}}_x$ and ${{\bm{\omega}}}_y$ are distinct, which means that the targets are spatially distinct.
\section{DOA Estimation via Tensor Decomposition with Vandermonde Factor Matrix}\label{3}
We have shown that the received signal of TB MIMO radar with transmit subarrays can be formulated as a high-order tensor. It is useful to point out that \eqref{tensor1} and \eqref{tensor2} are identical if the idea of tensor reshape is applied and ${\bf K}$ is replaced by ${\bf H} \odot {\bf \Delta}$. Hence, a general 4-order tensor model can be used to express the received signal for TB MIMO radar with uniformly spaced subarrays, given by
\begin{equation}
{\cal Z} \triangleq [[ {{\bf{G}},{\bf{X}},{\bf{B}},{\bf{C}}} ]]+{\cal R}\label{general}
\end{equation}
where ${\bf{G}}\in {\mathbb{C}^{{S} \times {L}}}$ is the {\it transmit subarray steering matrix}. Essentially, $\bf G$ can be interpreted as the result of element-wise spatial smoothing across the transmit elements. A new dimension is introduced to express the phase rotations between transmit subarrays in the tensor model, which matches the derivations in \eqref{tensor1} and \eqref{tensor2}.
The tensor decomposition of ${\cal Z}$ can be regarded as a constrained tensor decomposition, since one of the factor matrices is structured by the regular array configuration. Generally, the ALS algorithm can be applied to decompose such a tensor. However, the convergence of the ALS algorithm relies heavily on the determination of the tensor rank, which is an NP-hard problem, and the Vandermonde structure of the factor matrix is ignored. The number of iterations of the ALS algorithm is also uncertain, which may lead to high computational complexity. In the literature \cite{28,29,35}, the uniqueness condition of the tensor decomposition with special-structured factor matrices, e.g., Toeplitz, Hankel, Vandermonde and column-wise orthonormal, has been investigated. The structured factor matrix may change the uniqueness condition and, therefore, point to new tensor decomposition methods.
In this section, we mainly focus on the tensor decomposition with Vandermonde factor matrix in application to DOA estimation for TB MIMO radar with uniformly spaced subarrays. A computationally efficient DOA estimation method is proposed and we discuss the application of the proposed method for both linear and planar arrays.
To begin with, a 3-order tensor ${\cal F} \triangleq [[ {{\bf{G}},({\bf{X}} \odot {\bf{B}}), {\bf{C}}}]]$ can be reshaped from \eqref{general} (see \textbf{Fact~\ref{reshape}}), whose mode-3 unfolding is ${\bf F}_{(3)} = ({\bf G} \odot {\bf{X}} \odot {\bf{B}}){\bf C}^T$. Note that the $q$-th column of ${\bf F}_{(3)}$ coincides with the noise-free signal in \eqref{ULAz} for the linear array or \eqref{yq} for the planar array. In other words, signal covariance matrix-based DOA estimation methods like MUSIC and ESPRIT can be conducted by using ${\bf R} = 1/Q{\bf F}_{(3)}{\bf F}^H_{(3)}$ as the signal covariance matrix. Meanwhile, note that ${\bf{G}}$ is either a Vandermonde matrix or the KR product of a pair of Vandermonde matrices. Thus, \textbf{Lemma~\ref{111}} can be applied to conduct a tensor decomposition-based DOA estimation if the second precondition is satisfied, i.e., $r({\bf C}) = L$.
Taking a ULA as an example, let ${\bf G} = {\bf K}$. The SVD of ${{\bf{F}}_{(3)}}$ is denoted by ${{\bf{F}}_{(3)}} = {\bf{U}}{\bf \Lambda }{{\bf{V}}^H}$, where ${\bf{U}} \in {\mathbb{C}^{{SKN} \times L}}$, ${\bf{\Lambda}} \in {\mathbb{C}^{L \times L}}$, and ${\bf{V}} \in {\mathbb{C}^{{Q} \times L}}$. According to \textbf{Lemma~\ref{111}}, there must exist a nonsingular matrix $\bf M$ of size $L\times L$ such that
\begin{equation}
{\bf{UM}} = {\bf{K}} \odot {\bf{X}} \odot {\bf B}\label{19}
\end{equation}
or equivalently,
\begin{equation}
{{\bf{U}}_1}{\bf{M}} = {\bf{\overline K}} \odot {\bf{X}}\odot {\bf B}, \qquad {{\bf{U}}_2}{\bf{M}} = {\bf{\underline K}} \odot {\bf{X}}\odot {\bf B}\label{20}
\end{equation}
where submatrices ${{\bf{U}}_1}= \left[ {{{\bf{I}}_{KN(S - 1)}},{{\bf{0}}_{KN(S - 1) \times KN}}} \right]{\bf{U}}$ and ${{\bf{U}}_2}=\left[ {{{\bf{0}}_{KN(S - 1) \times KN}},{{\bf{I}}_{KN(S - 1)}}} \right]{\bf{U}}$ are truncated from rows of $\bf U$. Since $\bf K$ is a Vandermonde matrix, ${\bf{\underline K}} = {\bf{\overline K}}{{\bf \Omega}}$, where ${{\bf \Omega}} = diag({\bm \omega})$. Substitute it into \eqref{20} to obtain
\begin{equation}
{{\bf{U}}_2}{\bf{M}} = {{\bf{U}}_1}{\bf{M}}{{\bf \Omega}}.\label{21}
\end{equation}
Since $\bf M$ and ${{\bf \Omega}}$ are both full rank, we have ${{\bf{U}}_2} = {{\bf{U}}_1}\left( {{\bf{M}}{{\bf{\Omega }}}{{\bf{M}}^{ - 1}}} \right)$. After the eigenvalue decomposition (EVD) of the matrix ${\bf{U}}_1^\dag {{\bf{U}}_2}$, ${{\bm{\omega}}}$ can be estimated as the vector of eigenvalues and $\bf M$ is the matrix of the corresponding eigenvectors. Then, the target DOA can be computed by
\begin{equation}
\begin{aligned}
& {\hat \omega}(l) = {e^{ - j\pi {\Delta _m}\sin {\bar \theta _l}}}\\
& {\Delta _m}\sin {\bar \theta _l} - {\Delta _m}\sin {{\bar \theta}^{'}_l} = \pm 2k \\
\end{aligned}\label{grating}
\end{equation}
where $k \in \left( { - \frac{{\Delta _m }}{2},\frac{{\Delta _m }}{2}} \right)$ is an integer, ${\bar \theta _l}$ is the true direction, and ${{\bar \theta}^{'}_l}$ denotes the potential grating lobes when $\Delta _m \geq 2$.
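The shift-invariance steps \eqref{19}-\eqref{21} together with \eqref{grating} can be sketched numerically. The following noise-free example is our illustration: the step size is $\Delta_m = 1$ so no grating lobes arise, and random complex matrices stand in for $\bf X$ and $\bf B$. The DOAs are recovered from the eigenvalues of ${\bf U}_1^\dag {\bf U}_2$.

```python
import numpy as np

rng = np.random.default_rng(2)

def khatri_rao(A, C):
    M, N = A.shape
    Q, _ = C.shape
    return np.einsum('in,jn->ijn', A, C).reshape(M * Q, N)

S, Kw, N, Q = 6, 3, 4, 20
theta = np.deg2rad([-20.0, 15.0])        # true target directions
L = len(theta)
dm = 1                                    # step size Delta_m (no grating lobes)
omega = np.exp(-1j * np.pi * dm * np.sin(theta))
Km = omega[None, :] ** np.arange(S)[:, None]   # Vandermonde subarray steering K
X  = rng.standard_normal((Kw, L)) + 1j * rng.standard_normal((Kw, L))
Bm = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
Cm = rng.standard_normal((Q, L)) + 1j * rng.standard_normal((Q, L))

F3 = khatri_rao(khatri_rao(Km, X), Bm) @ Cm.T   # noise-free mode-3 unfolding
U = np.linalg.svd(F3, full_matrices=False)[0][:, :L]
U1 = U[:Kw * N * (S - 1), :]              # drop the last subarray block
U2 = U[Kw * N:, :]                        # drop the first subarray block
w_hat = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)   # estimates of omega
theta_hat = np.arcsin(-np.angle(w_hat) / (np.pi * dm))
assert np.allclose(np.sort(theta_hat), np.sort(theta), atol=1e-8)
```

In the noise-free case the eigenvalues match the generators exactly; with noise the same pipeline applies to the dominant $L$ singular vectors.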
The estimation of ${\bm\omega}_x$ and ${\bm \omega}_y$ for planar array is straightforward, which can also be found in the proof of \textbf{Lemma~\ref{111}}. Consequently, $\hat u_l$ and $\hat v_l$ can be determined by
\begin{equation}
\left\{ \begin{array}{l}
{{\hat \omega }_y}(l) = {e^{ - j\pi {\Delta _{{m_y}}}{{\bar u}_l}}}\\
{\Delta _{{m_y}}}{{\bar u}_l} - {\Delta _{{m_y}}}{{\bar u'}_l} = \pm 2{k_y}
\end{array} \right.,\quad \left\{ \begin{array}{l}
{{\hat \omega }_x}(l) = {e^{ - j\pi {\Delta _{{m_x}}}{{\bar v}_l}}}\\
{\Delta _{{m_x}}}{{\bar v}_l} - {\Delta _{{m_x}}}{{\bar v'}_l} = \pm 2{k_x}
\end{array} \right.\label{grating2}
\end{equation}
where $k_y \in \left( { - \frac{{\Delta _{m_y} }}{2},\frac{{\Delta _{m_y} }}{2}} \right)$ and $k_x \in \left( { - \frac{{\Delta _{m_x} }}{2},\frac{{\Delta _{m_x} }}{2}} \right)$ are integers, respectively, $\bar u_l$ and $\bar v_l$ indicate the DOA information of the $l$-th target, while ${{\bar u}^{'}_l}$ and ${{\bar v}^{'}_l}$ correspond to the potential grating lobes. Since $u_l \triangleq \sin\varphi_l\sin\theta_l$ and $v_l \triangleq \sin\varphi_l\cos\theta_l$, the pair $(\hat \theta_l, \hat\varphi_l)$ can be computed as
\begin{equation}
{\hat \theta _l} = \arctan \left(\frac{{{\hat u_l}}}{{{\hat v_l}}} \right), \qquad {\hat \varphi _l} = \arcsin \left(\sqrt {\hat u_l^2 + \hat v_l^2} \right). \label{DOA}
\end{equation}
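A quick numerical check of \eqref{DOA} (our illustration; `arctan2` is used in place of a plain arctangent so that the sign of the azimuth is preserved when $v_l > 0$):

```python
import numpy as np

theta = np.deg2rad([30.0, -45.0])         # azimuth angles
phi   = np.deg2rad([20.0,  60.0])         # elevation angles
u = np.sin(phi) * np.sin(theta)           # u_l = sin(phi_l) sin(theta_l)
v = np.sin(phi) * np.cos(theta)           # v_l = sin(phi_l) cos(theta_l)

# invert the mapping as in eq. (DOA)
theta_hat = np.arctan2(u, v)
phi_hat = np.arcsin(np.sqrt(u**2 + v**2))
assert np.allclose(theta_hat, theta)
assert np.allclose(phi_hat, phi)
```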
The process in \eqref{19}-\eqref{21} can be regarded as the generalized ESPRIT method \cite{39}. Compared to other tensor decomposition-based methods like PARAFAC, the Vandermonde structure of the factor matrix is exploited and the computational complexity is reduced significantly. No iterations are required and convergence is guaranteed.
However, the precondition $r({\bf C}) = L$ must be satisfied. In some target detection applications, two targets with similar Doppler shifts may exist. Under this circumstance, the two corresponding columns of $\bf C$ are considered to be linearly dependent. This rank deficiency limits the application of the computationally efficient DOA estimation method above. Besides, the spatial ambiguity problem further restricts the placement of transmit elements: the distance between the phase centers of two adjacent subarrays should be no more than half the working wavelength, which limits the array aperture. To tackle the rank deficiency problem and obtain a higher spatial resolution, \eqref{general} is reshaped by squeezing $\bf B$ and $\bf C$ into one dimension. The third factor matrix ${\bf B} \odot {\bf C}$, as the KR product of a Vandermonde matrix and an arbitrary matrix, generically has rank $\min(QN,L)$.\footnote{Although there exists no deterministic formula for the rank of the KR product of a Vandermonde matrix and an arbitrary matrix, it is generically full rank. See Appendix~\ref{222}.} Two targets with identical Doppler shifts can be resolved, while the grating lobes can be eliminated by comparing the estimation result obtained from ${\bf G}$ to the distinct target angular information obtained from ${\bf X}$ \cite{52,53}.
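The footnote's claim can be illustrated numerically: even when two columns of $\bf C$ coincide (two targets sharing a Doppler shift), the KR product ${\bf B} \odot {\bf C}$ generically reaches rank $\min(QN, L)$. In this sketch (our illustration) random complex matrices stand in for the steering matrices.

```python
import numpy as np

rng = np.random.default_rng(3)

def khatri_rao(A, C):
    M, N = A.shape
    Q, _ = C.shape
    return np.einsum('in,jn->ijn', A, C).reshape(M * Q, N)

N, Q, L = 3, 8, 6
Bm = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
Cm = rng.standard_normal((Q, L)) + 1j * rng.standard_normal((Q, L))
Cm[:, 1] = Cm[:, 0]                        # two targets with identical Doppler

assert np.linalg.matrix_rank(Cm) < L       # C alone is rank deficient
# yet B kr C generically reaches min(QN, L) = L columns of full rank
assert np.linalg.matrix_rank(khatri_rao(Bm, Cm)) == min(N * Q, L)
```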
\subsection{Proposed Computationally Efficient DOA Estimation Method for TB MIMO Radar with Uniformly Spaced Transmit Subarrays}\label{key}
Considering the noise-free version of \eqref{general}, a 3-order tensor ${\cal T }\triangleq [[ {{\bf{G}},{\bf{X}},({\bf{B}}\odot{\bf{C}})}]]$ can be reshaped. The mode-3 unfolding of $\cal T$ is given by
\begin{equation}
{{\bf{T}}_{(3)}} = \left( {{\bf{G}} \odot {\bf{X}}} \right){\left( {{\bf{B}} \odot {\bf{C}}} \right)^T}\label{T3}
\end{equation}
where $\bf G$, $\bf X$, and ${\bf{B}} \odot {\bf{C}}$ are the three factor matrices, respectively. The receive steering matrix and the Doppler steering matrix are squeezed into one dimension. Since the generators of $\bf G$ are distinct, the directions of all targets are unique whether or not grating lobes exist. Hence, the third factor matrix ${\bf{B}} \odot {\bf{C}}$ has full column rank \cite{29}, and \textbf{Lemma~\ref{111}} holds for tensor $\cal T$. In the following, we develop methods for DOA estimation in TB MIMO radar with uniformly spaced subarrays via the decomposition of $\cal T$ for linear and planar arrays sequentially.
\subsubsection{ULA}
Let $\bf G = \bf K$. According to \textbf{Lemma~\ref{111}}, the decomposition of $\cal T$ is unique. To obtain the factor matrices with target DOA information, denote the SVD of ${{\bf{T}}_{(3)}}$ as ${{\bf{T}}_{(3)}} = {\bf{U}}{\bf \Lambda }{{\bf{V}}^H}$, where ${\bf{U}} \in {\mathbb{C}^{{SK} \times L}}$, ${\bf{\Lambda}} \in {\mathbb{C}^{L \times L}}$, and ${\bf{V}} \in {\mathbb{C}^{{NQ} \times L}}$. Then there exists a nonsingular matrix $\bf E$ of size $L\times L$ satisfying
\begin{equation}
{\bf{UE}} = {\bf{K}} \odot {\bf{X}}.\label{UE}
\end{equation}
Owing to the structure of the KR product, we can write
\begin{equation}
{{\bf{U}}_1}{\bf{E}} = {\bf{\overline K}} \odot {\bf{X}}, \qquad {{\bf{U}}_2}{\bf{E}} = {\bf{\underline K}} \odot {\bf{X}}\label{rr}
\end{equation}
where ${{\bf{U}}_1}= \left[ {{{\bf{I}}_{K(S - 1)}},{{\bf{0}}_{K(S - 1) \times K}}} \right]{\bf{U}}$ and ${{\bf{U}}_2}=\left[ {{{\bf{0}}_{K(S - 1) \times K}},{{\bf{I}}_{K(S - 1)}}} \right]{\bf{U}}$ are truncated from rows of $\bf U$, respectively. Substitute ${\bf{\underline K}} = {\bf{\overline K}}{{\bf \Omega}}$ into \eqref{rr} to obtain
\begin{equation}
{{\bf{U}}_2}{\bf{E}} = {{\bf{U}}_1}{\bf{E}}{{\bf \Omega}}.\label{UK}
\end{equation}
Since $\bf E$ and ${{\bf \Omega}}$ are both full rank, we have ${{\bf{U}}_2} = {{\bf{U}}_1}\left( {{\bf{E}}{{\bf{\Omega }}}{{\bf{E}}^{ - 1}}} \right)$. This reveals the connection between ${\bf{U}}_1^\dag {{\bf{U}}_2}$ and ${{\bf{E}}{{\bf{\Omega }}}{{\bf{E}}^{ - 1}}}$. From the EVD of ${\bf{U}}_1^\dag {{\bf{U}}_2}$, the generators ${{\bm{\omega}}}$ can be estimated with $\bf E$ being the matrix of the corresponding eigenvectors. Then, $\left\{ {{\hat \theta _{l}}} \right\}_{l = 1}^L$ can be computed by \eqref{grating}.
Noting that $\left( {\frac{{{\bm{\kappa }}_l^H}}{{{\bm{\kappa }}_l^H{{\bm{\kappa }}_l}}} \otimes {{\bf{I}}_K}} \right)\left( {{{\bm{\kappa }}_l} \otimes {{\bm{\chi }}_l}} \right) = {{\bm{\chi }}_l}$ and ${{\bm{\kappa }}_l^H}{{\bm{\kappa }}_l} = S$, the compact form of ${{\bm{\chi }}_l}$ is given as
\begin{equation}
{{\bm{\chi }}_l} = 1/S \left( {{{\bm{\kappa }}_l^H} \otimes {{\bf{I}}_K}} \right){\bf{U}}{{\bf{e}}_l}.\label{K1}
\end{equation}
Equation~\eqref{K1} provides an estimate of each column vector of $\bf X$. Given ${\bf W}_0$ as prior information, a polynomial rooting method \cite{52} can be applied to estimate the unambiguous $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$ in ${\bf A}_0$ independently. Instead of exploiting the signal subspace shift-invariance of the transmit subarray steering matrix, the method in \cite{52} focuses on the Vandermonde structure of ${\bf{A}}_0$ within a single subarray and reveals the relationship between the TB MIMO radar transmit beampattern and the generalized sidelobe canceller (GSC). Consequently, the estimation results obtained from ${\bf K}$ and ${\bf X}$ both provide the target angular information, and the grating lobes can be eliminated by comparing the results to each other. Since \eqref{K1} is conducted column by column, the angles are paired automatically before comparison.
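The polynomial rooting step can be sketched as follows. This is our reading of the GSC-based construction in \cite{52}, not a verbatim implementation: the first row of ${\bf W}_0$ is corrected by the conjugate of ${\bm\chi}_l$ so that the blocking matrix annihilates ${\bm\alpha}_0(\theta_l)$ exactly, and on the unit circle $z^{M_0-1}F(z)$ becomes an ordinary polynomial whose coefficients are the diagonal sums of ${\bf P}{\bf P}^H$.

```python
import numpy as np

rng = np.random.default_rng(4)

M0, K = 6, 4                               # subarray elements, waveforms
theta = np.deg2rad(25.0)                   # single target direction
z_true = np.exp(-1j * np.pi * np.sin(theta))
a0 = z_true ** np.arange(M0)               # alpha_0(theta), Vandermonde
W0 = rng.standard_normal((M0, K)) + 1j * rng.standard_normal((M0, K))
chi = W0.conj().T @ a0                     # chi_l = W0^H alpha_0(theta_l)

# blocking matrix: P^H alpha_0(theta_l) = 0 by construction
# (the conjugate on chi makes the first-row subtraction cancel exactly)
P = W0.copy()
P[0, :] -= chi.conj()

# F(z) = p^H(z) P P^H p(z); on |z| = 1, z^(M0-1) F(z) is a polynomial whose
# coefficient of z^(M0-1+k) is the k-th diagonal sum of Q = P P^H
Qm = P @ P.conj().T
coeffs = [np.trace(Qm, offset=k) for k in range(M0 - 1, -M0, -1)]
roots = np.roots(coeffs)
z_hat = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]  # root closest to unit circle
theta_hat = np.arcsin(-np.angle(z_hat) / np.pi)
assert abs(theta_hat - theta) < 1e-6
```

The roots of $z^{M_0-1}F(z)$ come in conjugate-reciprocal pairs; the (double) root on the unit circle carries the target angle, while off-circle roots are discarded by the closest-to-circle selection.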
An outline of the proposed method for the DOA estimation in TB MIMO radar with linear array is given as \textbf{Algorithm~\ref{alg1}}.
\begin{algorithm}[tb]
\caption{DOA Estimation for 1-D TB MIMO Radar with Uniformly Spaced Transmit Subarrays} \label{alg1}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Signal tensor ${\cal{Z}} \in {\mathbb{C}^{S \times K \times N \times Q}}$ from \eqref{tensor1}
\ENSURE ~~\\
Targets DOA information $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$
\STATE Reshape ${\cal{Z}}$ into a 3-order tensor ${\cal{T}} \in {\mathbb{C}^{{S} \times K \times {NQ}}}$, where the mode-3 unfolding of ${\cal{T}}$ is given by \eqref{T3};
\STATE Compute the SVD of the matrix ${{\bf{T}}_{(3)}} = {\bf{U}}{\bf \Lambda }{{\bf{V}}^H}$;
\STATE Formulate two submatrices ${{\bf{U}}_1},{{\bf{U}}_2}$ satisfying \eqref{rr};
\STATE Calculate the EVD of the matrix ${\bf{U}}_1^\dag {{\bf{U}}_2}$;
\STATE Estimate ${\hat \theta}_l$ via \eqref{grating}, which contains grating lobes;
\STATE Construct ${{\bm{\chi }}_l}$ via \eqref{K1};
\STATE Define ${\bf{\tilde W}_0} \triangleq {\bf{W}}_0- {\bf W}'_0$, ${{\bf{W}}'_0} \triangleq {\left[ {{{\bm{\chi}}_l},{{\bf{0}}_{K \times ({M_0} - 1)}}} \right]^T}$;
\STATE Build a polynomial via $F({z_l}) \triangleq {{\bf{p}}^H}({z_l}){\bf{\tilde W}}_0{{{\bf{\tilde W}}}_0^H}{\bf{p}}({z_l})$, where ${\bf{p}}(z_l) \triangleq {\left[ {1,z_l,\cdots,{{z_l}^{{M_0} - 1}}} \right]^T}$ and $z_l \triangleq e^{-j\pi\sin\theta_l}$;
\STATE Compute the roots of the polynomial $F({z_l})$ and select the one closest to the unit circle as $\hat z_l$;
\STATE Estimate ${\theta}_l $ via ${\hat \theta}_l = \arcsin \left(\frac{{j\ln ({\hat z_l})}}{\pi }\right)$;
\STATE Compare the results in step~5 and step~10;
\RETURN $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$.
\end{algorithmic}
\end{algorithm}
\subsubsection{URA}
First, substitute $ {\bf G } = {\bf H} \odot {\bf \Delta}$ into \eqref{T3}. Similar to \eqref{UE}, the SVD of ${{\bf{T}}_{(3)}}$ is ${{\bf{T}}_{(3)}} = {\bf{U}}{\bf \Lambda }{{\bf{V}}^H}$, and there is a nonsingular matrix ${\bf E} \in {\mathbb{C}^{ L \times L}}$ such that
\begin{equation}
{\bf{UE}} = {\bf H} \odot {\bf \Delta} \odot {\bf{X}}.
\end{equation}
Considering the KR product, the Vandermonde structure of both ${\bf H}$ and ${\bf \Delta}$ is exploited via
\begin{equation}
\begin{aligned}
& {{\bf{U}}_{\rm{2}}}{\bf{E}} = {\bf{\underline H }} \odot {\bf{\Delta}} \odot {\bf{X}} = \left( {{\bf{\overline H }} \odot {\bf{\Delta}} \odot {\bf{X}}} \right){{\bf{\Omega }}_y} = {{\bf{U}}_1}{\bf{E}}{{\bf{\Omega }}_y}\\
& {{\bf{U}}_{\rm{4}}}{\bf{E}} = {\bf{H }} \odot {\bf{\underline \Delta}} \odot {\bf{X}} = \left( {{\bf{H }} \odot {\bf{\overline \Delta}} \odot {\bf{X}}} \right){{\bf{\Omega }}_x} = {{\bf{U}}_3}{\bf{E}}{{\bf{\Omega }}_x}\\
\end{aligned}
\end{equation}
where ${{\bf \Omega}_y} = diag({{\bm\omega}_y})$, ${{\bf \Omega}_x} = diag({{\bm\omega}_x})$, ${{\bf{U}}_{\rm{1}}}$, ${{\bf{U}}_{\rm{2}}}$, ${{\bf{U}}_{\rm{3}}}$ and ${{\bf{U}}_{\rm{4}}}$ are the submatrices truncated from rows of ${\bf{U}}$, i.e.,
\begin{equation}
\begin{aligned}
&{{\bf{U}}_1}{\rm{ = }}\left[ {{{\bf{I}}_{{I}K({J} - 1)}},{{\bf{0}}_{{I}K({J} - 1) \times {I}K}}} \right]{\bf{U}}\\
&{{\bf{U}}_2}{\rm{ = }}\left[ {{{\bf{0}}_{{I}K({J} - 1) \times {I}K}},{{\bf{I}}_{{I}K({J} - 1)}}} \right]{\bf{U}}\\
&{{\bf{U}}_3} = \left( {{{\bf{I}}_{{J}}} \otimes \left[ {{{\bf{I}}_{K({I} - 1)}},{{\bf{0}}_{K({I} - 1) \times K}}} \right]} \right){\bf{U}}\\
&{{\bf{U}}_4} = \left( {{{\bf{I}}_{{J}}} \otimes \left[ {{{\bf{0}}_{K({I} - 1) \times K}},{{\bf{I}}_{K({I} - 1)}}} \right]} \right){\bf{U}}.\\
\end{aligned}\label{select}
\end{equation}
Like \eqref{UK}, the vectors ${{\bm{\omega}}_y}$ and ${{\bm{\omega}}_x}$ can be estimated as the collections of eigenvalues of ${\bf{U}}_1^\dag {{\bf{U}}_2}$ and ${\bf{U}}_3^\dag {{\bf{U}}_4}$, respectively, and $\bf E$ is the matrix of the corresponding eigenvectors. Then, the possible pairs of $(\theta_l, \varphi_l)$ can be computed by \eqref{grating2}-\eqref{DOA}. To eliminate the grating lobes, the relationship between the TB MIMO radar transmit beampattern and the GSC is applied again to estimate the target DOA in the 2-D case \cite{53}. Specifically, $\left( {\frac{{{\bm{\kappa }}_l^H}}{{{\bm{\kappa }}_l^H{{\bm{\kappa }}_l}}} \otimes {{\bf{I}}_K}} \right)\left( {{{\bm{\kappa }}_l} \otimes {{\bm{\chi }}_l}} \right) = {{\bm{\chi }}_l}$ and ${{\bm{\kappa }}_l^H}{{\bm{\kappa }}_l} = S$ still hold by replacing ${{\bm{\kappa }}_l}$ with ${{\bm{h}}_l} \odot {{\bm{\delta}}_l}$. Hence, each column of $\bf X$ can be restored by
\begin{equation}
{{\bm{\chi }}_l} = 1/S\left[ {({{\bm{h}}_l} \odot {{\bm{\delta}}_l})^H \otimes {{\bf{I}}_K}} \right]{\bf{U}}{{\bf{e}}_l}.\label{K}
\end{equation}
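The 2-D shift-invariance procedure, from the selection matrices in \eqref{select} to the two EVDs, can be sketched numerically. In this noise-free example (our illustration; unit step sizes and random complex matrices for $\bf X$ are assumed) the generators ${\bm\omega}_y$ and ${\bm\omega}_x$ are recovered from ${\bf U}_1^\dag{\bf U}_2$ and ${\bf U}_3^\dag{\bf U}_4$.

```python
import numpy as np

rng = np.random.default_rng(5)

def khatri_rao(A, C):
    M, N = A.shape
    Q, _ = C.shape
    return np.einsum('in,jn->ijn', A, C).reshape(M * Q, N)

I, J, Kw, L = 4, 5, 3, 2
u = np.array([0.3, -0.2])                 # u_l = sin(phi_l) sin(theta_l)
v = np.array([0.1, 0.5])                  # v_l = sin(phi_l) cos(theta_l)
wy = np.exp(-1j * np.pi * u)              # generators of H (unit step size)
wx = np.exp(-1j * np.pi * v)              # generators of Delta
H  = wy[None, :] ** np.arange(J)[:, None]
Dl = wx[None, :] ** np.arange(I)[:, None]
X  = rng.standard_normal((Kw, L)) + 1j * rng.standard_normal((Kw, L))

G = khatri_rao(khatri_rao(H, Dl), X)      # H kr Delta kr X, j varies slowest
U = np.linalg.svd(G, full_matrices=False)[0][:, :L]

# selection matrices, as in eq. (select)
U1 = U[:I * Kw * (J - 1), :]              # drop the last J-block
U2 = U[I * Kw:, :]                        # drop the first J-block
sel_lo = np.kron(np.eye(J), np.hstack([np.eye(Kw * (I - 1)),
                                       np.zeros((Kw * (I - 1), Kw))]))
sel_hi = np.kron(np.eye(J), np.hstack([np.zeros((Kw * (I - 1), Kw)),
                                       np.eye(Kw * (I - 1))]))
U3, U4 = sel_lo @ U, sel_hi @ U

wy_hat = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)
wx_hat = np.linalg.eigvals(np.linalg.pinv(U3) @ U4)
assert np.allclose(np.sort(np.angle(wy_hat)), np.sort(np.angle(wy)))
assert np.allclose(np.sort(np.angle(wx_hat)), np.sort(np.angle(wx)))
```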
Since the TB matrix $ {\bf{W}}_0$ is given as prior information, ${{\bm{\chi }}_l} = {\bf{W}}_0^H{\bm{\alpha }}_0(\theta_l ,\varphi_l )$ can be rewritten as $K$ different linear equations
\begin{equation}
{{{\chi }}_l}(k) = {\bf{w}}_k^H{\bm{\alpha }}_0(\theta_l ,\varphi_l ), \quad k = 1,2,\cdots, K \label{element}
\end{equation}
or equivalently, ${\bf{p}}_k^H{\bm{\alpha }}_0(\theta_l ,\varphi_l ) = 0$, where ${\bf{p}}_k \triangleq {\bf{w}}_k-{[{{{\chi }}_l}(k) , {\bf 0}_{1 \times (M_0-1)}]^T}$. It can be seen that the linear equations in \eqref{element} hold if and only if ${{{\left|\left| {{\bf{P}}^H{{\bm{\alpha }}_0}({\hat \theta _l},{\hat\varphi _l})}\right|\right|^2}}} = 0$, where ${\bf P} \triangleq {\bf W}_0-{\bf W}'_0$ and ${\bf W}'_0 \triangleq [{{{\bm \chi}}_{l}}, {\bf 0}_{K \times (M_0-1)}]^T$. Therefore, the estimate of the pair $(\theta_l ,\varphi_l )$ can be found by solving the following convex optimization problem \cite{53}
\begin{equation}
\begin{aligned}
&\min \limits_{(\hat \theta_l,\hat \varphi_l)} {{{\left|\left| {{\bf{P}}^H({\bf{u}}(\hat \theta_l ,\hat \varphi_l ) \otimes {\bf{v}}(\hat \theta_l ,\hat \varphi_l ))}\right|\right|^2}}}.\label{convex}
\end{aligned}
\end{equation}
The structure of \eqref{convex} is similar to that of the TB MIMO radar transmit beampattern. After obtaining the pair $(\hat u_l, \hat v_l)$, the distinct target DOA can be computed by \eqref{DOA}. Since the computation in \eqref{K} is conducted column by column, the independent estimates of the target DOA from $\bf G$ and $\bf X$ are paired automatically. By comparing them, the grating lobes can be mitigated.
The primary procedure for DOA estimation in TB MIMO radar with a planar array is summarized in \textbf{Algorithm~\ref{alg2}}.
\begin{algorithm}[tb]
\caption{DOA Estimation for 2-D TB MIMO radar with Uniformly Spaced Transmit Subarrays} \label{alg2}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Signal Tensor ${\cal{Z}} \in {\mathbb{C}^{{J}\times {I} \times K \times N \times Q}}$ from \eqref{tensor2}
\ENSURE ~~\\
Targets DOA information $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$ and $\left\{ {{\varphi _{l}}} \right\}_{l = 1}^L$
\STATE Reshape ${\cal{Z}}$ into a third-order tensor ${\cal{T}} \in {\mathbb{C}^{{S} \times K \times {NQ}}}$, where the mode-3 unfolding of ${\cal{T}}$ is given by \eqref{T3};
\STATE Compute the SVD of the matrix ${{\bf{T}}_{(3)}} = {\bf{U}}{\bf \Lambda }{{\bf{V}}^H}$;
\STATE Formulate four submatrices ${{\bf{U}}_1},{{\bf{U}}_2},{{\bf{U}}_3},{{\bf{U}}_4}$ via \eqref{select};
\STATE Calculate the EVD of the matrices ${\bf{U}}_1^\dag {{\bf{U}}_2}$ and ${\bf{U}}_3^\dag {{\bf{U}}_4}$;
\STATE Estimate the pair $({\hat u}_l,\hat v_l)$ via \eqref{grating2} and compute the pair $(\hat \theta_l,\hat \varphi_l)$ via \eqref{DOA};
\STATE Construct ${{\bm{\chi }}_l}$ via \eqref{K}, where $\bf E$ is the matrix of the corresponding eigenvectors in step~4;
\STATE Build the matrix $\bf P$ and solve the minimization problem \eqref{convex} to obtain the pair $(\hat u_l,\hat v_l)$;
\STATE Compute the unambiguous pair $(\hat \theta_l, \hat \varphi_l)$ via \eqref{DOA};
\STATE Compare the results in step~5 and step~8;
\RETURN $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$ and $\left\{ {{\varphi _{l}}} \right\}_{l = 1}^L$.
\end{algorithmic}
\end{algorithm}
\subsection{Parameter Identifiability}
As mentioned in \textbf{Fact~\ref{krank}}, the generic uniqueness condition of tensor decomposition for a high-order tensor is given as $\sum\limits_{n = 1}^N {{k_{{{\bf{A}}^{(n)}}}}} \ge 2L + (N - 1)$, where $N$ is the tensor order and $L$ is the tensor rank. The upper bound on the tensor rank, i.e., the maximum number of targets that can be resolved, rises with the tensor order at the rate of the Kruskal-rank of a matrix. However, if a factor matrix has a special structure, the uniqueness condition changes. An example of tensor decomposition with a Vandermonde factor matrix is described in \textbf{Lemma~\ref{111}}. It can be observed that the maximum number of targets that can be resolved by \eqref{general} is determined by the preconditions. The first precondition is that the first factor matrix of the tensor, as a Vandermonde matrix or the KR product of two Vandermonde matrices, must have a distinct vector of generators. In this paper, we assume this condition holds, since it means that each target possesses a unique direction, which is reasonable for a DOA estimation problem.
The second precondition requires that the third factor matrix has rank $L$. Note that the Doppler steering vectors of any two targets with the same Doppler shift are linearly dependent with a scale difference determined by the target RCS. Two scenarios are discussed next.
When the target Doppler shifts are unique, \textbf{Lemma~\ref{111}} can be applied directly and the tensor $\cal F$ can be used. To ensure uniqueness of the decomposition, it is required that \cite{29}
\begin{equation}
\min ((S-1)KN, Q) \geq L.
\end{equation}
In MIMO radar, the number of pulses during a single CPI is usually large. Thus, the maximum number of targets that can be resolved is generically $(S-1)KN$, which is better than that in \textbf{Fact~\ref{krank}} \cite{29}. However, the size of the transmit array is confined since the distance between phase centers of two adjacent subarrays must be no more than half the working wavelength to avoid the spatial ambiguity. This restriction degrades the spatial resolution and also raises the difficulty of the physical implementation of the array.
When there are at least two targets that have identical Doppler shifts, the tensor $\cal T$ is used to ensure the second precondition. The receive steering matrix is squeezed together with the Doppler steering matrix to distinguish targets with identical velocity. Although the rank of a specific tensor remains the same when it is reshaped, it was proved that different reshaping schemes are not equivalent from the performance point of view \cite{47}. In our case, it means that the identifiability changes and, therefore, the uniqueness condition of the decomposition for $\cal T$ requires
\begin{equation}
\min \left((S-1)K, NQ\right) \geq L.
\end{equation}
The usage of $\cal T$ is more appropriate for the general case, since the rank deficiency problem caused by identical target Doppler shifts is solved. Additionally, by reshaping tensor $\cal Z$ into tensor $\cal T$, the angular information can be estimated independently from $\bf G$ and $\bf X$. The unambiguous estimation result from $\bf X$ provides a second estimate of the target DOA and can be used to eliminate the grating lobes. Thus, it can be concluded that the use of the tensor model $\cal T$ has at least the above two advantages. The maximum number of targets that can be resolved is reduced to $(S-1) \cdot K$. To improve the parameter identifiability, increasing the number of transmit subarrays or transmit waveforms is worth considering.
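The trade-off between the two uniqueness conditions can be made concrete with a small sketch (the dimension values below are illustrative assumptions, not taken from the paper):

```python
# Sketch: compare the two generic identifiability bounds discussed above:
# min((S-1)*K*N, Q) for tensor F (all Doppler shifts distinct) versus
# min((S-1)*K, N*Q) for the reshaped tensor T.
def max_targets_F(S, K, N, Q):
    return min((S - 1) * K * N, Q)

def max_targets_T(S, K, N, Q):
    return min((S - 1) * K, N * Q)

# Illustrative dimensions: many pulses per CPI.
S, K, N, Q = 8, 4, 12, 50
print(max_targets_F(S, K, N, Q))  # 50 -> limited by the number of pulses Q
print(max_targets_T(S, K, N, Q))  # 28 -> (S-1)K, but robust to equal Dopplers
```

As the text notes, the bound for $\cal T$ is smaller, but it does not collapse when two targets share a Doppler shift.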
\section{Arbitrary but Identical Subarrays with Multiple Scales of Shift-invariances}\label{4}
In the previous section, we assumed that the transmit subarrays are uniformly spaced to obtain a Vandermonde structure in the factor matrix of the designed tensor. However, such a constraint on the subarray structure can be relaxed. The placement of the subarrays need not be uniform, while the configuration within a single subarray can be arbitrary. The tensor model in \eqref{general} is applicable to TB MIMO radar with any arbitrary but identical subarrays, since the extended factor matrix that represents the phase rotations between transmit subarrays is determined only by the coordinates of the transmit subarray phase centers. The difference is that the array configuration changes the structure of the factor matrix, which may require extra steps to recover the target DOAs. A typical example has been given earlier, where the unambiguous spatial information in $\bf X$ is exploited to eliminate the cyclic ambiguity in $\bf G$.
In the following, we discuss two general cases in which a transmit array with multiple scales of shift-invariances is placed on a lattice, and explain the use of the proposed computationally efficient DOA estimation method in both scenarios.
\subsection{Generalized Vandermonde Matrix}
Note that the Vandermonde structure of the steering matrix ${\bf G}$ is linked to the phase rotations between the transmit subarrays, and it is exploited in a look-up table for finding target DOAs. The Vandermonde structure is only a special case leading to phase rotation. Indeed, take, for example, a linear array with its elements placed on a lattice, where all lattice cells are enumerated sequentially. The indices of the elements form an ordered set of increasing positive integers. It can be shown that $m_s$ must be a subset of this set, since the first element of each subarray corresponds to a unique lattice cell. Hence, $m_s$ may increase uniformly or non-uniformly.
From \eqref{generate}, it can be observed that $m_s$ determines ${\bf K}$. When $m_s$ increases uniformly, ${\bf K}$ is a Vandermonde matrix.\footnote{This has been derived in Section~\ref{sig}, where we found that the shift-invariance between different subarrays is related to the step size of $m_s$, or more specifically, the coordinates of the transmit subarray phase centers.} Depending on the step size of $m_s$, i.e., $\Delta_m$, the configurations of adjacent subarrays can be partly overlapped ($\Delta_m < M_0$) or non-overlapped ($\Delta_m \ge M_0$). The DOA estimation method proposed in the last section can be used directly.
In the case when $m_s$ rises non-uniformly, let us consider as an example the tensor model \eqref{tensor1} with $S = 7$ subarrays and $m_s = \{1,2,3,5,6,7,9\}$. Each subarray contains three elements, therefore, the original transmit array is a ULA with $M = 11$ elements. Then, $\bf K$ is a {\it generalized Vandermonde matrix} \cite{32}, which can be written as ${\bf K} \triangleq [{\bf z}_1,\cdots, {\bf z}_L]$, where ${\bf z}_l \triangleq [1,z_l,z_l^2,z_l^4,z_l^5,z_l^6,z_l^8]^T$ and ${z_l} \triangleq {e^{ - j\pi \sin {\theta _l}}}$.
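A minimal sketch of how such a generalized Vandermonde matrix can be built from $m_s$ (the function name and the angle values are illustrative assumptions):

```python
import numpy as np

# Sketch: generalized Vandermonde matrix for m_s = {1,2,3,5,6,7,9}; each
# column is [z^0, z^1, z^2, z^4, z^5, z^6, z^8]^T with generator
# z_l = exp(-j*pi*sin(theta_l)).
def generalized_vandermonde(m_s, thetas_deg):
    z = np.exp(-1j * np.pi * np.sin(np.deg2rad(thetas_deg)))  # generators, (L,)
    exps = np.asarray(m_s) - 1                                # row exponents
    return z[np.newaxis, :] ** exps[:, np.newaxis]            # (S, L)

K_gen = generalized_vandermonde([1, 2, 3, 5, 6, 7, 9], [-5.0, 10.0, 18.0])
assert K_gen.shape == (7, 3)
```
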
The idea of multiple invariance ESPRIT\cite{50} is introduced to conduct the DOA estimation with non-uniformly spaced transmit subarrays. Consequently, $\bf K$ can be interpreted as the combination of a set of submatrices ${\bf K}^{(sub)}$ denoting different sub-ULAs associated with various shift-invariances, i.e.,
\begin{equation}
\begin{aligned}
&{\bf K}^{(sub)} \triangleq \left[\left({\bf K}^{(1,1)}\right)^T, \left({\bf K}^{(2,1)}\right)^T,\left({\bf K}^{(1,2)}\right)^T\right]^T\\
&{\bf K}^{(1,1)}\triangleq \left[{\bf z}^{(1,1)}_1,\cdots, {\bf z}^{(1,1)}_L\right]\\
&{\bf K}^{(2,1)} \triangleq \left[{\bf z}^{(2,1)}_1,\cdots, {\bf z}^{(2,1)}_L\right]\\
&{\bf K}^{(1,2)} \triangleq \left[{\bf z}^{(1,2)}_1,\cdots, {\bf z}^{(1,2)}_L\right]
\end{aligned}\label{case}
\end{equation}
where ${\bf z}^{(1,1)}_l$ is selected from ${\bf z}_l$ with $m_s = \{1,2,3\}$, ${\bf z}^{(2,1)}_l$ is selected from ${\bf z}_l$ with $m_s = \{5,6,7\}$, and ${\bf z}^{(1,2)}_l$ is selected from ${\bf z}_l$ with $m_s = \{1,3,5,7,9\}$. In other words, ${\bf K}^{(1,1)}$ is a submatrix of $\bf K$ that consists of the first three rows with shift-invariance $\Delta_m = 1$. The other two submatrices are analogous. Note that \eqref{case} is not the only subarray construction method, but it contains all transmit subarrays with a minimal distinct shift-invariance set $\Delta = \{\Delta_m | \Delta_m = 1,2\}$.
Substituting \eqref{case} into \eqref{T3}, we can write
\begin{equation}
{{\bf{T}}^{(sub)}_{(3)}} = \left( {{\bf{K}}^{(sub)} \odot {\bf{X}}} \right){\left( {{\bf{B}} \odot {\bf{C}}} \right)^T}.\label{Tsub}
\end{equation}
Its SVD is given as ${{\bf{T}}^{(sub)}_{(3)}} = {\bf U}^{(sub)}{\bf \Lambda}^{(sub)}\left({\bf V}^{(sub)}\right)^H$. It can be observed that \textbf{Lemma~\ref{111}} holds for \eqref{Tsub}. By constructing ${\bf K}^{(sub)}$, a new transmit subarray steering matrix that consists of several Vandermonde submatrices can be introduced. To exploit the Vandermonde structure, an extra row selection must be applied. Taking ${\bf{K}}^{(1,1)}$ as an example, we can generalize \eqref{UE} to obtain
\begin{equation}
{{\bf{K}}^{(1,1)} \odot {\bf{X}}} = {\bf U}^{(1,1)}{\bf E}\label{40}
\end{equation}
where ${\bf U}^{(1,1)}$ is truncated from ${\bf U}^{(sub)}$ in the same way as ${\bf K}^{(1,1)}$ from ${\bf K}^{(sub)}$. Thus, each column of ${\bf K}^{(1,1)}$ can be estimated. The estimates of ${\bf K}^{(1,2)}$ and ${\bf K}^{(2,1)}$ can be obtained similarly. It is worth noting that if $\Delta_m >1$, the problem of grating lobes may still occur when recovering $\theta_l$ from ${\bf U}^{\left({N_{\Delta_m}},\Delta_m\right)}$, where $N_{\Delta_m}$ represents the number of subarrays whose shift-invariance is determined by $\Delta_m$. Usually, the unambiguous spatial information in $\bf X$ can be exploited to eliminate the potential grating lobes. However, this requires each subarray to be a dense ULA, which restricts the aperture of the transmit subarray and, therefore, the spatial resolution.
Note that the generators of the Vandermonde submatrices in ${\bf K}^{(sub)}$ provide the target DOA information at different exponential levels, i.e., $z_l^{\Delta_m}$. Based on this, a polynomial function is designed to estimate the target DOA without using the second factor matrix ${\bf X}$.
For every possible shift-invariance $\Delta_m$, denote
\begin{equation}
\begin{aligned}
&{\bf a}_l^{(\Delta_m)} \triangleq \left[{ \overline {\bm \kappa}}_l^{(1, \Delta_m)T},\cdots, {\overline {\bm \kappa}}_l^{({N_{\Delta_m}}, \Delta_m)T}\right]^T\\
&{\bf b}_l^{(\Delta_m)} \triangleq \left[{\underline {\bm \kappa}}_l^{(1, \Delta_m)T},\cdots, {\underline {\bm \kappa}}_l^{({N_{\Delta_m}}, \Delta_m)T}\right]^T.\\
\end{aligned}\label{41}
\end{equation}
To illustrate \eqref{41}, consider the array structure in \eqref{case}. When $\Delta_m = 1$, there are two different submatrices/sub-ULAs, i.e., ${N_{1}} = 2$, ${\bf a}_l^{(1)} = \left[{ \overline {\bm \kappa}}_l^{(1, 1)T},{ \overline {\bm \kappa}}_l^{(2, 1)T}\right]^T = \left[1, z_l, z_l^4,z_l^5\right]^T$, and ${\bf b}_l^{(1)} = \left[{ \underline {\bm \kappa}}_l^{(1, 1)T},{ \underline {\bm \kappa}}_l^{(2, 1)T}\right] ^T= \left[z_l, z^2_l, z_l^5,z_l^6\right]^T$. When $\Delta_m = 2$, only one submatrix/sub-ULA exists, i.e., ${N_{2}} = 1$, ${\bf a}_l^{(2)} = { \overline {\bm \kappa}}_l^{(1,2)} = \left[1, z_l^2, z_l^4,z_l^6\right]^T$ and ${\bf b}_l^{(2)} = { \underline{\bm \kappa}}_l^{(1,2)} = \left[z_l^2, z_l^4,z_l^6,z_l^8\right]^T$. Also, the following constraint should be satisfied
\begin{equation}
{\bf a}_l^{(\Delta_m)}{{z}_l^{\Delta_m}} = {\bf b}_l^{(\Delta_m)}.\label{42}
\end{equation}
It is proved in \cite{32} that \eqref{42} can be achieved by rooting the polynomial function
\begin{equation}
f({z_l}) \triangleq \sum\limits_{{\Delta _m} \in \Delta } {\left| {\left| {{\bf{a}}_l^{({\Delta _m})}z_l^{{\Delta _m}} - {\bf{b}}_l^{({\Delta _m})}} \right|} \right|_F^2} \label{43}
\end{equation}
as long as two coprime numbers can be found in the shift-invariance set $\Delta$. By the definition of $z_l$, the root nearest to the unit circle should be chosen as $\hat z_l$, which finally yields the target DOA estimate ${\hat \theta}_l = \arcsin \left(\frac{{j\ln ({\hat z_l})}}{\pi }\right)$. The construction of ${\bf K}^{(sub)}$ enables the use of $\cal T$ in a more general scenario: the transmit subarrays can be organized in a non-uniform way. If the shift-invariance set $\Delta$ contains a pair of coprime integers, the problem of spatial ambiguity can be solved with no limitation on the transmit subarray structure. Hence, the structures of the transmit subarrays can be arbitrary but identical.
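The effect of the coprime shift-invariance set can be illustrated numerically. The sketch below evaluates $f(z)$ from \eqref{43} on a dense angular grid and takes the minimizer; grid search is our simplifying assumption here, not the polynomial rooting of \cite{32}. With $\Delta = \{1,2\}$ from the running example, the minimum is unique:

```python
import numpy as np

# Sketch: f(z) = sum_{Delta_m} ||a^(Delta_m) z^Delta_m - b^(Delta_m)||^2,
# evaluated over a theta grid. The Delta_m = 1 term alone already pins
# down z; the Delta_m = 2 term alone would be ambiguous (grating lobes).
theta_true = 18.0
z = np.exp(-1j * np.pi * np.sin(np.deg2rad(theta_true)))
a1, b1 = z ** np.array([0, 1, 4, 5]), z ** np.array([1, 2, 5, 6])  # Delta_m = 1
a2, b2 = z ** np.array([0, 2, 4, 6]), z ** np.array([2, 4, 6, 8])  # Delta_m = 2

grid = np.linspace(-90, 90, 18001)                # 0.01 deg spacing
zg = np.exp(-1j * np.pi * np.sin(np.deg2rad(grid)))
f = (np.abs(np.outer(zg, a1) - b1) ** 2).sum(axis=1) \
  + (np.abs(np.outer(zg ** 2, a2) - b2) ** 2).sum(axis=1)
theta_hat = grid[np.argmin(f)]
assert abs(theta_hat - theta_true) < 0.1
```
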
An outline of the proposed DOA estimation method for TB MIMO radar with non-uniformly spaced arbitrary but identical subarrays is summarized in \textbf{Algorithm~\ref{alg3}}.
\begin{remark}
Multiple scales of shift-invariances can also be found in a Vandermonde matrix ${\bf K}$ \cite{50}. A simple way to build ${\bf K}^{(sub)}$ is to concatenate the submatrices of ${\bf K}$ that, respectively, consist of the odd rows, the even rows, and all rows. Hence, \textbf{Algorithm~\ref{alg3}} is also applicable to TB MIMO radar with uniformly spaced transmit subarrays. Since the subarray manifold need not be a dense ULA, it is possible to place the transmit array on a larger lattice to obtain a higher spatial resolution. If some elements in a transmit subarray are broken, a useful solution is to disable the corresponding elements in the other subarrays to keep the manifolds identical. Moreover, we can select part of the elements in all subarrays for other purposes, e.g., communication in a joint radar-communication system \cite{56}. These remarks can be extended to the case of a planar array.
\end{remark}
\begin{algorithm}[tb]
\caption{DOA Estimation for TB MIMO radar with Non-Uniformly Spaced Arbitrary but Identical Subarrays} \label{alg3}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Signal Tensor ${\cal{Z}} \in {\mathbb{C}^{S \times K \times N \times Q}}$ from \eqref{general}
\ENSURE ~~\\
Targets DOA information $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$
\STATE Construct a new matrix ${\bf K}^{(sub)}$ in \eqref{case}, which can be divided into several Vandermonde submatrices and contains all transmit subarrays;
\STATE Update the transmit subarray steering matrix and reshape ${\cal{Z}}$ into a third-order tensor ${\cal{T}} \in {\mathbb{C}^{{S'} \times K \times {NQ}}}$;
\STATE Compute the SVD of the matrix ${{\bf{T}}^{(sub)}_{(3)}} = {\bf U}^{(sub)}{\bf \Lambda}^{(sub)}{\bf V}^{(sub)H}$;
\STATE Estimate each Vandermonde submatrix in ${\bf K}^{(sub)}$ sequentially via \eqref{UE}-\eqref{UK};
\STATE Build two vectors ${\bf a}_l^{(\Delta_m)}$ and ${\bf b}_l^{(\Delta_m)}$ from \eqref{41} for each column of estimated ${\bf K}^{(sub)}$;
\STATE Compute the roots of the polynomial function in \eqref{43} and select the root nearest to the unit circle as $\hat z_l$;
\STATE Estimate ${\hat \theta}_l = \arcsin \left(\frac{{j\ln ({\hat z_l})}}{\pi }\right)$;
\RETURN $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$.
\end{algorithmic}
\end{algorithm}
\subsection{Multiscale Sensor Array}\label{scale}
Another case in which multiple scales of shift-invariances can be found is the multiscale sensor array \cite{43,45,46}. Generally, a URA can be regarded as a 2-level multiscale sensor array with different scales of shift-invariances. As shown in Fig.~\ref{sys}, the generation process of such a URA contains two steps. First, consider a single subarray composed of $M_0$ elements as the reference subarray. Let $I$ replica subarrays be placed uniformly along the $x$-axis, which form a larger subarray at a higher level. Then, $J$ copies of this higher-level subarray are organized uniformly along the $y$-axis. Combining them together, a URA is constructed. Note that in this specific case, the $I$ subarrays at level-1 are non-overlapped, while their $J$ counterparts at level-2 are partly overlapped.
From \eqref{tensor2}, it is clear that the transmit subarray steering matrices for subarrays at level-1 and level-2 are $\bf \Delta$ and $\bf H$, respectively. If the URA itself is repeated and spatially moved to other arbitrary but known locations, a new array that yields a larger spatial aperture is created. In this way, an $R$-level multiscale sensor array can be constituted. For any $S_r$ subarrays at level-$r$, $r = 1,2,\cdots,R$, define ${\bf G}^{(r)} \in {\mathbb{C}^{{S_r} \times L}}$ as the transmit subarray steering matrix. The overall transmit subarray steering matrix is given by
\begin{equation}
{\bf G} = {\bf G}^{(R)} \odot {\bf G}^{(R-1)} \odot \cdots \odot {\bf G}^{(1)} \triangleq \mathop \odot \limits_{r = 1}^R {{\bf{G}}^{(r)}}.\label{G}
\end{equation}
Substituting \eqref{G} into \eqref{general} gives the tensor model for TB MIMO radar with an $R$-level multiscale sensor array at the transmit side. Note that this is an $(R+3)$-order tensor. Its reshaping and the use of \textbf{Lemma~\ref{111}} in this case are quite flexible. The DOA estimation can be conducted via \textbf{Algorithm~\ref{alg2}} or \textbf{Algorithm~\ref{alg3}} analogously.
Take a cubic transmit array,\footnote{Repeat the URA in Fig.~\ref{sys} $D$ times along the $z$-axis with coordinates $(0,0,m_d), d = 1,\cdots, D$.} for example. It is a 3-level multiscale sensor array, and we can immediately write the three transmit subarray steering matrices as ${\bf G}^{(1)} = {\bf\Delta}$, ${\bf G}^{(2)} = {\bf H}$ and ${\bf G}^{(3)} = {\bf \Xi}$, respectively, where ${\bf \Xi} \triangleq {\left[{\bm \tau}_1,{\bm\tau}_2,\cdots, {\bm \tau}_D\right]^T_{D\times L}}$ and ${\bm\tau}_d \triangleq \left[e^{-j\pi(m_d-1)\cos\varphi_1},e^{-j\pi(m_d-1)\cos\varphi_2},\cdots, e^{-j\pi(m_d-1)\cos\varphi_L}\right]^T$.
The parameter identifiability for different reshaped 3-order tensors ${\cal T} \in {\mathbb{C}^{{I_1} \times {I_2} \times {I_3}}}$ varies, which is determined by the uniqueness condition of tensor decomposition, i.e.,
\begin{equation}
\min ((I_1-1)I_2, I_3) \geq L \label{45}
\end{equation}
where $I_1$, $I_2$ and $I_3$ can be regarded as products over a partition of the set $\{S_1,S_2,\cdots,S_R,K,N,Q\}$, and $I_1I_2I_3 = KNQ\prod\limits_{r = 1}^R {{S_r}}$. From \eqref{45}, it is possible to estimate the target DOAs using only a single pulse. The transmit array with multiple scales of shift-invariances is exploited via the tensor reshaping to compensate for the small number of snapshots. This property can also be used to distinguish coherent sources. An example of two targets with identical Doppler shifts has been discussed in Section~\ref{key}. See \cite{51} for more discussion of the partial identifiability of the tensor decomposition, where specific conditions for coherent or collocated sources are investigated.
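Since \eqref{45} depends on how the modes are grouped into $(I_1, I_2, I_3)$, one can enumerate the groupings to find the most favorable reshape. The sketch below does this by brute force; the helper name and the mode sizes $(S, K, N, Q) = (8, 4, 12, 50)$ are illustrative assumptions:

```python
from itertools import permutations
from math import prod

# Sketch: brute-force search over groupings of the mode sizes into
# (I1, I2, I3), maximizing the identifiability bound min((I1-1)*I2, I3).
def best_reshape(sizes):
    best, arg = -1, None
    for perm in permutations(sizes):
        for c1 in range(1, len(perm) - 1):
            for c2 in range(c1 + 1, len(perm)):
                I1, I2, I3 = prod(perm[:c1]), prod(perm[c1:c2]), prod(perm[c2:])
                bound = min((I1 - 1) * I2, I3)
                if bound > best:
                    best, arg = bound, (I1, I2, I3)
    return best, arg

print(best_reshape((8, 4, 12, 50)))  # best bound and the reshape achieving it
```
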
\section{Simulation Results}\label{5}
In this section, we investigate the DOA estimation performance of the proposed method in terms of the root mean square error (RMSE) and the probability of resolution of closely spaced targets for TB MIMO radar. Throughout the simulations, there are $Q = 50$ pulses in a single CPI. We assume $L = 3$ targets whose directions are given by $\left\{ {{\theta _{l}}} \right\}_{l = 1}^L$ for a linear array and $\left\{\left( {{\theta _{l}}}, {\varphi_l} \right) \right\}_{l = 1}^L$ for a planar array; the normalized Doppler shifts are $f_1 = -0.1, f_2 = 0.2$ and $ f_3 = 0.2$. The number of Monte Carlo trials is $P = 200$. The RCS of every target is drawn from a standard Gaussian distribution and obeys the Swerling I model. Note that the last two targets share an identical Doppler shift, which causes $\bf C$ to drop rank. The noise signals are assumed to be Gaussian, zero-mean, and white both temporally and spatially. The $K$ orthogonal waveforms are ${S_k}(t) = \sqrt {\frac{1}{{{T}}}} {e^{j2\pi \frac{k}{{{T}}}t}}, \, k=1, \cdots, K$. For both linear and planar arrays, the tensor model in \eqref{general} is used and the TB matrix is pre-designed \cite{12,31}.
For linear array, we assume a transmit ULA with $S = 8$ subarrays. Each transmit subarray has $M_0 = 10$ elements spaced at half the wavelength. The placement of transmit subarrays can vary from totally overlapped case to non-overlapped case. The number of transmit elements is computed by $M = {M_0}+{{\Delta}_m}(S-1)$. The receive array has $N=12$ elements, which are randomly selected from the transmit array. For planar array, the reference transmit subarray is a $7 \times 7$ URA. The number of subarrays is $S = 6$, where $J = 2$ and $I = 3$. The distances between subarrays in both directions are fixed as the working wavelength, which means that $\Delta_{m_x} = 2$ and $\Delta_{m_y} = 2$. A number of $N = 12$ elements in the transmit array are randomly chosen as the receive array.
For comparison, the ESPRIT-based algorithm \cite{16}, which exploits the phase rotations between transmit subarrays, and the U-ESPRIT algorithm \cite{19}, which utilizes the conjugate symmetry of the array manifold, are used as signal covariance matrix-based DOA estimation methods, while the conventional ALS algorithm \cite{21,27}, which decomposes the factor matrices iteratively, is utilized as a signal tensor decomposition-based DOA estimation method. The Cram\'er-Rao lower bound (CRLB) for MIMO radar is also provided. For target DOAs estimated from the factor matrix $\bf X$, if applicable, we use a postfix to distinguish them, e.g., ALS-sub (Proposed-sub) refers to the estimation result computed from $\bf X$, while ALS (Proposed) denotes the estimation result originating from $\bf G$ after tensor decomposition.
\subsection{Example 1: RMSE and Probability of Resolution for Linear Array with Non-overlapped Subarrays}
Three targets are placed at $\theta_l = [-15^\circ,5^\circ,15^\circ]$. Consider the matricized form of $\cal Z$ in \eqref{general}. The goal is to estimate $\theta_l$ from ${\bf Z} = {\bf T}_{(3)}+{\tau}{\bf R}$, where ${\bf T}_{(3)}$ is given by \eqref{T3} and ${\bf G} = {\bf K}$. The SNR is measured as ${\rm SNR}\,[{\rm dB}] = 10\log \left( \left\| {{{\bf{T}}_{(3)}}} \right\|_F^2 / \left\| {\tau {\bf{R}}} \right\|_F^2 \right)$. The RMSE is computed by
\begin{equation*}
RMSE = \sqrt {\frac{1}{{2PL}}\sum\limits_{l = 1}^L {\sum\limits_{p = 1}^P {{{\left( {{{\hat \theta }_l}(p) - {\theta _l}(p)} \right)}^2}} } }.
\end{equation*}
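In code, the two metrics of this example can be computed as follows (a sketch; the function names are illustrative, `snr_db` assumes a base-10 logarithm, and array shapes are arbitrary):

```python
import numpy as np

# Sketch: Frobenius-norm SNR, 10*log10(||T3||_F^2 / ||noise||_F^2), and the
# RMSE over P trials and L targets, sqrt((1/(2PL)) * sum of squared errors).
def snr_db(T3, noise):
    return 10 * np.log10(np.linalg.norm(T3) ** 2 / np.linalg.norm(noise) ** 2)

def rmse_deg(theta_hat, theta_true):
    # theta_hat, theta_true: (P, L) arrays of estimates / ground truth, degrees
    return np.sqrt(np.mean((theta_hat - theta_true) ** 2) / 2)
```
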
As shown in Fig.~\ref{fig1}, the RMSE results decline gradually with the rise of SNR for all methods. The ESPRIT-based algorithm exploits only the phase rotations between transmit subarrays, and therefore its performance is quite poor. The U-ESPRIT algorithm performs better since the number of snapshots is doubled. For the conventional ALS algorithm and our proposed method, the target angular information can be obtained from both factor matrices $\bf K$ and $\bf X$, which are compared with each other to eliminate the potential grating lobes. The proposed method approaches the CRLB with a lower threshold than the ALS method, since the Vandermonde structure of the factor matrix is exploited. Therefore, the proposed method performs better at low SNR. Note that the complexity of our proposed method is reduced significantly; it requires approximately as many flops as the ALS method spends in a single iteration. Also, the comparison of the estimation results between $\bf G$ and $\bf X$ shows a reasonable difference, which is mainly caused by the different apertures of the subarray and the whole transmit array.
For the probability of resolution, we assume only two closely spaced targets located at $\theta_l = [-5^\circ,-6^\circ]$. These two targets are considered to be resolved when $\left\| {{{\hat \theta }_l} - {\theta _l}} \right\| \le \left\| {{\theta _1} - {\theta _2}} \right\|/2, l = 1,2$. The Doppler shifts are both $f = 0.2$ and the other parameters are the same as before.
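The resolution criterion above can be sketched as a simple predicate (the function name is illustrative):

```python
# Sketch: two targets count as resolved when each estimate deviates from
# its true angle by at most half the true angular separation.
def resolved(theta_hat, theta_true):
    sep = abs(theta_true[0] - theta_true[1])
    return all(abs(h - t) <= sep / 2 for h, t in zip(theta_hat, theta_true))

print(resolved([-5.2, -5.9], [-5.0, -6.0]))  # True: errors 0.2, 0.1 <= 0.5
print(resolved([-5.0, -5.4], [-5.0, -6.0]))  # False: second error 0.6 > 0.5
```
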
In Fig.~\ref{fig2}, the probability of resolution results for all tested methods are shown, and they are consistent with those in Fig.~\ref{fig1}. All methods achieve full resolution in the high-SNR region, and the resolution declines with decreasing SNR. The ESPRIT method presents the worst performance, while the U-ESPRIT performs slightly better. The results of the ALS-sub method and the Proposed-sub method are almost the same. A gap of approximately 3~dB SNR can be observed between the proposed method and the ALS method, which means that our proposed method enables the lowest SNR threshold. Both the accuracy and the resolution of our proposed method surpass those of the other methods, since the shift-invariances between and within different transmit subarrays are fully exploited.
\begin{figure}
\centering
\subfloat[RMSE versus SNR\label{fig1}]{
\includegraphics[width=0.5\linewidth]{ULA-RMSE.pdf}}
\hfill
\subfloat[Resolution versus SNR\label{fig2}]{
\includegraphics[width=0.5\linewidth]{ULA-Resolution.pdf}}
\\
\subfloat[RMSE versus SNR\label{fig5}]{
\includegraphics[width=0.5\linewidth]{URA-RMSE.pdf}}
\hfill
\subfloat[Resolution versus SNR\label{fig6}]{
\includegraphics[width=0.5\linewidth]{URA-Resolution.pdf}}
\caption{DOA estimation performance for TB MIMO radar with uniformly spaced subarrays for linear array (a)-(b) and planar array (c)-(d), 200 trials.}
\end{figure}
\subsection{Example 2: RMSE and Probability of Resolution for Planar Array with Partly Overlapped Subarrays}
In this example, three targets are placed at ${\theta _l} = [ -40^{\circ}, -30^{\circ}, -20^{\circ}]$ and ${\varphi _l} = [ 25^{\circ}, 35^{\circ}, 45^{\circ}]$. The signal model ${\bf Z} = {\bf T}_{(3)}+{\tau}{\bf R}$ is applied, where ${\bf T}_{(3)}$ is given by \eqref{T3} with ${\bf G} = {\bf H} \odot {\bf \Delta}$. The SNR is measured in the same way as for the linear array. The RMSE for a planar array is computed by
\begin{footnotesize}
\begin{equation*}
RMSE = \sqrt {\frac{1}{{2PL}}\sum\limits_{l = 1}^L {\sum\limits_{p = 1}^P {\left[ {{{\left( {{{\hat \theta }_l}(p) - {\theta _l}(p)} \right)}^2} + {{\left( {{{\hat \varphi }_l}(p) - {\varphi _l}(p)} \right)}^2}} \right]} } }.
\end{equation*}
\end{footnotesize}
In Fig.~\ref{fig5}, the RMSEs of ESPRIT, U-ESPRIT, ALS and the proposed method are given. The CRLB is also provided. The performance of the ESPRIT and U-ESPRIT methods is relatively poor because they ignore the shift-invariance of the received signal within a single subarray. It can be observed that the Proposed-sub and the ALS-sub successfully estimate the target DOAs via $\bf X$ in the planar array case, which confirms the validity of \eqref{convex}. The results can be used to mitigate the spatial ambiguity in the subsequent estimations. Like their counterparts for the linear array, the RMSEs of the proposed method and the ALS method are almost the same above 0~dB SNR, while the performance of the proposed method at low SNR is better; the ALS method ignores the Vandermonde structure during tensor decomposition. Compared to \eqref{convex}, the DOA estimation result in ${\bf G}$ takes advantage of a larger aperture and therefore achieves a better RMSE performance.
To evaluate the resolution performance, only two targets are reserved and the spatial directions are ${\theta _l} = [ -10^{\circ}, -11^{\circ}]$ and ${\varphi _l} = [ 15^{\circ}, 16^{\circ}]$. The resolution is considered successful if $\left\| {{{\hat \theta }_l} - {\theta _l}} \right\| \le \left\| {{\theta _1} - {\theta _2}} \right\|/2, \left\| {{{\hat \varphi }_l} - {\varphi _l}} \right\| \le \left\| {{\varphi _1} - {\varphi _2}} \right\|/2, l = 1,2$. The target Doppler shifts are the same, given as $f = 0.2$. The other parameters are unchanged.
Fig.~\ref{fig6} shows the probability of resolution results for all methods. The proposed method achieves the lowest SNR threshold, which benefits from the full exploitation of the shift-invariance and the Vandermonde structure during tensor decomposition. Note that the convergence of the ALS method is unstable and can be influenced by the tensor size. It can be observed that the resolution performance of the ALS method deteriorates compared with its counterpart in Fig.~\ref{fig2}. This implies that our proposed method is more robust for 2-D DOA estimation, since no iterations are required.
\subsection{Example 3: RMSE Performance for Linear Array with Different ${\Delta}_m $}
In this example, we mainly consider the RMSE performance when $\Delta_m$ changes from one to at most $M_0$. The aperture is increased gradually. The SNR is assumed to be 10~dB. All other parameters are the same as those in Example~1.
Given the number of subarrays and the structure of a single subarray, the aperture of the overall transmit ULA rises with the increase of $\Delta_m$ while the number of elements shared by two adjacent subarrays declines. When $\Delta_m = 0$, this model is identical to that for conventional ESPRIT method \cite{12,31}, and there is no transmit subarray. When $\Delta_m$ rises, the distance between phase centers for two adjacent subarrays becomes larger than half the working wavelength and grating lobes are generated. The locations of these grating lobes are determined by \eqref{grating}, and can be eliminated. Meanwhile, the transmit array aperture is increased and the DOA estimation performance should be improved.
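Under the Example-1 geometry ($M_0 = 10$, $S = 8$), the trade-off between aperture and subarray overlap described above can be tabulated directly:

```python
# Sketch: transmit ULA size M = M0 + Delta_m*(S-1) and the number of
# elements shared by two adjacent subarrays, for the Example-1 geometry.
def array_size(M0, S, dm):
    return M0 + dm * (S - 1), max(M0 - dm, 0)  # (M, shared elements)

for dm in (1, 3, 5, 10):
    M, overlap = array_size(10, 8, dm)
    print(f"Delta_m={dm}: M={M}, shared elements={overlap}")
# e.g., Delta_m=10 gives M=80 with no overlap (non-overlapped subarrays)
```
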
To investigate the improvement, the RMSEs of three targets are computed as $\Delta_m$ increases. It can be seen in Fig.~\ref{fig3} that the RMSE results decrease steadily with the increase of $\Delta_m$. The ESPRIT method and the U-ESPRIT method suffer from grating lobes and do not fully exploit the received signal within a single subarray; hence, they perform poorly. The RMSEs of the Proposed-sub and the ALS-sub are almost unchanged, since their estimation is based only on the subarray, which is fixed during the simulation. Meanwhile, the proposed method and the ALS method achieve better accuracy than their counterparts originating from $\bf X$ when $\Delta_m >3$. It can be noted from Fig.~\ref{fig1} that both the ALS method and our proposed method converge when the SNR is above 10~dB. Consequently, the RMSEs of the proposed method and the ALS method are nearly coincident.
To evaluate the RMSE performance versus $\Delta_{m_x}$ or $\Delta_{m_y}$ for a planar array, it is necessary to add a new subarray in one direction while keeping the array structure in the other direction unchanged. This can be achieved by constructing an L-shaped transmit array, where each element is replaced by a URA subarray. However, this analysis is beyond the scope of this paper. In general, it can be concluded that the proposed method can estimate the target DOAs via the phase rotations between transmit subarrays. If the placement of two adjacent subarrays satisfies certain conditions, e.g., $\Delta_m >3$ for a linear array, the RMSE performance is better than that computed from a single subarray. Note that the received signal of two adjacent subarrays can be obtained by spatial smoothing \cite{32}; hence, a proper spatial smoothing of the received signal can improve the DOA estimation performance.
\begin{figure}
\centering
\subfloat[RMSE versus ${\Delta}_m$\label{fig3}]{
\includegraphics[width=0.5\linewidth]{ULA-RMSE-Aperture.pdf}}
\hfill
\subfloat[Generalized Vandermonde matrix\label{fig4}]{
\includegraphics[width=0.5\linewidth]{RMSE-Generalized.pdf}}
\\
\subfloat[Elevation RMSE versus SNR\label{fig7}]{
\includegraphics[width=0.5\linewidth]{Elevation.pdf}}
\hfill
\subfloat[Azimuth RMSE versus SNR\label{fig8}]{
\includegraphics[width=0.5\linewidth]{Azimuth.pdf}}
\caption{ RMSE results for TB MIMO radar with different subarray configurations.}
\end{figure}
\subsection{Example 4: Generalized Vandermonde Factor Matrix for Linear Array with $m_s = \{1,2,3,5,6,7,9\}$}
Here we evaluate the proposed DOA estimation method for TB MIMO radar with non-uniformly spaced transmit subarrays. The transmit linear array has $S = 7$ subarrays with $m_s = \{1,2,3,5,6,7,9\}$. Each subarray contains $M_0 = 10$ elements. $N = 12$ elements are randomly chosen from the transmit array to form the receive array. Three targets are placed at ${\theta_l} = [-5^\circ,10^\circ,18^\circ]$ with normalized Doppler shifts $f_l = [0.3,-0.15,-0.15]$. To simplify the signal model, each subarray is a ULA, although this structure is not exploited during the DOA estimation in this example. Equations \eqref{case}-\eqref{43} can be applied directly, since the subarray structure stays identical. Two different transmit arrays are introduced for comparison to illustrate the improved performance provided by constructing ${\bf K}^{(sub)}$. Both of them can be regarded as linear arrays with uniformly spaced subarrays ($\Delta_m = 1$). The first one has $S = 7$ subarrays, while the second one has $S = 9$ subarrays to achieve the same aperture. The DOA estimation for these two transmit arrays can be conducted by \textbf{Algorithm~\ref{alg1}}. Meanwhile, the conventional ALS method can be applied to decompose the factor matrix ${\bf K}^{(sub)}$, which is then used to estimate the target DOAs by solving \eqref{43}. The generalized-ESPRIT (G-ESPRIT) method in \cite{46} is also used for comparison. The CRLBs of the three different transmit arrays are shown as well.
From Fig.~\ref{fig4}, it can be observed that the formulation of ${\bf K}^{(sub)}$ exploits the multiple scales of shift-invariance in the generalized Vandermonde matrix. By solving \eqref{43}, the grating lobes are eliminated efficiently. Hence, the structures of the transmit subarrays can be arbitrary but identical, which provides more flexibility for array design. The RMSE of the proposed method is lower than those of the G-ESPRIT and ALS methods. Also, the performance of the non-uniformly spaced transmit subarrays is better than that of the uniformly spaced transmit subarrays ($S = 7$). This is expected, since the aperture is increased due to sparsity. Compared to the fully spaced transmit subarray case ($S = 9$), the performance of the proposed method deteriorates slightly. However, the fully spaced array can be extremely costly if the array aperture is increased further. By using the generalized Vandermonde matrix, the proposed method enables sparsity in the transmit array, which achieves higher resolution with fewer elements.
\subsection{Example 5: Multiscale Sensor Array with Arbitrary but Identical Subarrays}
In the final example, we illustrate the performance of the proposed DOA estimation method for TB MIMO radar with arbitrary but identical subarrays. Specifically, a planar array with $S = 4\times 4$ subarrays is considered, whose phase centers form a uniform rectangular grid with a spacing of half the working wavelength. For each subarray, $M_0 = 4$ elements are randomly placed in a circle centered on the phase center with a radius of a quarter of the wavelength. The structure of all subarrays is identical; hence, the transmit array can be regarded as a 3-level multiscale sensor array, and the transmit subarray steering matrix can be obtained from \eqref{G}. The $N = 12$ receive elements are randomly selected from the transmit array. Three targets are placed at ${\theta _l} = [ -26^{\circ}, -19^{\circ}, -12^{\circ}]$ and ${\varphi _l} = [ 11^{\circ}, 21^{\circ}, 31^{\circ}]$. The other parameters are the same as those in Example~2.
Since the subarray is arbitrary, the DOA information can only be estimated from the phase rotations between the transmit subarrays. Alternatively, the transmit array interpolation technique \cite{23} is introduced to map the original transmit array into a $4 \times 4$ URA to enable ESPRIT-like DOA estimation, which is referred to as Inter-TEV in Fig.~\ref{fig7} and Fig.~\ref{fig8}. It can be observed that, by carefully designing the mapping matrix, the RMSEs of the Inter-TEV method are better than those of the ESPRIT and U-ESPRIT methods for both elevation and azimuth estimation. However, the proposed method surpasses the other methods with a lower RMSE. This is because of the full exploitation of the shift-invariance between and within different transmit subarrays.
\section{Conclusion}\label{6}
The problem of tensor decomposition with Vandermonde factor matrix in application to DOA estimation for TB MIMO radar with arbitrary but identical transmit subarrays has been considered. A general fourth-order tensor that can be used to express the TB MIMO radar received signal in a variety of scenarios, e.g., linear and planar arrays, uniformly and non-uniformly spaced subarrays, regular and irregular subarrays, has been designed. The shift-invariance of the received signal between and within different transmit subarrays has been used to conduct DOA estimation. Specifically, a computationally efficient tensor decomposition method has been proposed to estimate the generators of the Vandermonde factor matrices, which can be used as a look-up table for finding the target DOAs. The proposed method fully exploits the shift-invariance of the received signal between and within different subarrays, and it can be regarded as a generalized ESPRIT method. Compared with conventional tensor decomposition-based techniques, our proposed method takes advantage of the Vandermonde structure of the factor matrices, and it requires no iterations or any prior information about the tensor rank. The parameter identifiability of our tensor model has also been studied via the discussion of the uniqueness condition of the tensor decomposition. Simulation results have verified that the proposed DOA estimation method has better accuracy and higher resolution as compared to existing techniques.
\appendices
\setcounter{section}{0}
\setcounter{equation}{45}
\vspace{-3mm}
\section{Proof of \textbf{Lemma~\ref{111}}}\label{222}
First, let ${\bf A}^{(1)} \in {\mathbb{C}^{{I_1}\times L}} $ be a Vandermonde matrix with distinct generators. Then $r\left({\bf A}^{(1)} \odot {\bf A}^{(2)}\right) = \min(I_1I_2,L)$, since it is the KR product of a Vandermonde matrix and an arbitrary matrix~[17]. Assuming $I_3 \geq L$ and $I_1I_2 \geq L$, it follows that $r\left({\bf A}^{(1)} \odot {\bf A}^{(2)}\right) = L$ and $r({\bf A}^{(3)}) = L$, since ${\bf A}^{(3)}$ has full column rank. The proof of \textbf{Lemma~\ref{111}} in this case is identical to that of Proposition~III.2 in~[17].
Next, let ${\bf A}^{(1)} = {\bf B} \odot {\bf C}$, where ${\bf B} \in {\mathbb{C}^{{J}\times L}}$ and ${\bf C} \in {\mathbb{C}^{{I}\times L}}$ are both Vandermonde matrices with distinct generators. Consider the rank of matrix ${\bf A}^{(1)} \odot {\bf A}^{(2)} = {\bf B} \odot {\bf C} \odot {\bf A}^{(2)} = {\bf \Pi}\left({\bf B} \odot {\bf A}^{(2)} \odot {\bf C}\right)$, where ${\bf \Pi}$ is an exchange matrix. Again, the rank of ${\bf B} \odot {\bf A}^{(2)}$ is $\min (JI_2, L)$ while $r\left({\bf B} \odot {\bf A}^{(2)} \odot {\bf C}\right) = \min (IJI_2, L)$. Since ${\bf \Pi}$ is nonsingular, $r\left({\bf A}^{(1)} \odot {\bf A}^{(2)}\right) = L$.
The mode-3 unfolding of $\cal Y$ is ${\bf Y}_{(3)} = \left({\bf A}^{(1)} \odot {\bf A}^{(2)}\right){\bf A}^{(3)T}$. The SVD of this matrix representation is denoted by ${\bf Y}_{(3)} = {\bf U}{\bf \Lambda}{\bf V}^H$, where ${\bf{U}} \in {\mathbb{C}^{{I_1I_2} \times L}}$, ${\bf{\Lambda}} \in {\mathbb{C}^{L \times L}}$, and ${\bf{V}} \in {\mathbb{C}^{{I_3} \times L}}$. Since $r\left({\bf A}^{(1)} \odot {\bf A}^{(2)}\right) = L$ and $r\left({\bf A}^{(3)}\right) = L$, it can be derived that a nonsingular matrix ${\bf E}\in {\mathbb{C}^{{L}\times L}}$ satisfies
\begin{equation}
{\bf U}{\bf E} = {\bf A}^{(1)} \odot {\bf A}^{(2)}.
\end{equation}
The Vandermonde structure of both ${\bf B}$ and ${\bf C}$ can be exploited via
\begin{equation}
\begin{aligned}
& {{\bf{U}}_{\rm{2}}}{\bf{E}} = {\bf{\underline B }} \odot {\bf{C}} \odot {\bf A}^{(2)} = \left( {{\bf{\overline B }} \odot {\bf{C}} \odot {\bf A}^{(2)}} \right){{\bf{\Omega }}_b} = {{\bf{U}}_1}{\bf{E}}{{\bf{\Omega }}_b}\\
& {{\bf{U}}_{\rm{4}}}{\bf{E}} = {\bf{B }} \odot {\bf{\underline C}} \odot {\bf A}^{(2)} = \left( {{\bf{B}} \odot {\bf{\overline C}} \odot {\bf A}^{(2)}} \right){{\bf{\Omega }}_c} = {{\bf{U}}_3}{\bf{E}}{{\bf{\Omega }}_c}\\
\end{aligned}
\end{equation}
where ${{\bf{\Omega }}_b} = {\rm diag}({\bm \omega}_b)$ and ${{\bf{\Omega }}_c} = {\rm diag}({\bm \omega}_c)$, with ${\bm \omega}_b$ and ${\bm \omega}_c$ denoting the vectors of generators of $\bf B$ and $\bf C$, respectively. The submatrices ${{\bf{U}}_{\rm{1}}}$, ${{\bf{U}}_{\rm{2}}}$, ${{\bf{U}}_{\rm{3}}}$ and ${{\bf{U}}_{\rm{4}}}$ are selected from the rows of ${\bf{U}}$ according to the ordering of the KR product, i.e.,
\begin{equation}
\begin{aligned}
&{{\bf{U}}_1}{\rm{ = }}\left[ {{{\bf{I}}_{{I}K({J} - 1)}},{{\bf{0}}_{{I}K({J} - 1) \times {I}K}}} \right]{\bf{U}}\\
&{{\bf{U}}_2}{\rm{ = }}\left[ {{{\bf{0}}_{{I}K({J} - 1) \times {I}K}},{{\bf{I}}_{{I}K({J} - 1)}}} \right]{\bf{U}}\\
&{{\bf{U}}_3} = \left( {{{\bf{I}}_{{J}}} \otimes \left[ {{{\bf{I}}_{K({I} - 1)}},{{\bf{0}}_{K({I} - 1) \times K}}} \right]} \right){\bf{U}}\\
&{{\bf{U}}_4} = \left( {{{\bf{I}}_{{J}}} \otimes \left[ {{{\bf{0}}_{K({I} - 1) \times K}},{{\bf{I}}_{K({I} - 1)}}} \right]} \right){\bf{U}}.\\
\end{aligned}
\end{equation}
Note that $\bf E$, ${\bf \Omega}_b$ and ${\bf \Omega}_c$ are of full rank. We have ${\bf{U}}_1^\dag {{\bf{U}}_2} = {\bf{E}}{{\bf{\Omega }}_b}{\bf E}^{-1}$ and ${\bf{U}}_3^\dag {{\bf{U}}_4} = {\bf{E}}{{\bf{\Omega }}_c}{\bf E}^{-1}$. Hence, the vectors ${\bm \omega}_b$ and ${\bm \omega}_c$ can be computed as the collections of eigenvalues of ${\bf{U}}_1^\dag {{\bf{U}}_2}$ and ${\bf{U}}_3^\dag {{\bf{U}}_4}$, respectively, while $\bf E$ is the matrix collecting the corresponding eigenvectors. From the generators of $\bf B$ and $\bf C$, the first factor matrix ${\bf A}^{(1)}$ can be reconstructed.
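To make the generator-extraction step concrete, the following minimal numerical sketch (our illustration, not part of the paper; all dimensions and generator values are invented) performs it for the simpler case where ${\bf A}^{(1)}$ is a single Vandermonde matrix: the generators are recovered as the eigenvalues of ${\bf U}_1^\dag {\bf U}_2$, built from the left singular vectors of the noise-free mode-3 unfolding.

```python
import numpy as np

rng = np.random.default_rng(0)
L, I1, I2, I3 = 3, 8, 4, 6
z = np.exp(1j * np.array([0.4, -0.9, 1.7]))  # distinct unit-modulus generators
A1 = np.vander(z, I1, increasing=True).T     # I1 x L Vandermonde factor
A2 = rng.standard_normal((I2, L))            # arbitrary second factor
A3 = rng.standard_normal((I3, L))            # full column rank since I3 >= L

def kr(A, B):
    # column-wise Khatri-Rao product: column l is kron(A[:, l], B[:, l])
    return np.einsum('il,jl->ijl', A, B).reshape(-1, A.shape[1])

Y3 = kr(A1, A2) @ A3.T                       # noise-free mode-3 unfolding
U = np.linalg.svd(Y3, full_matrices=False)[0][:, :L]
# shift-invariance: dropping the last/first block row gives U2 = U1 E diag(z) E^{-1}
U1, U2 = U[:-I2], U[I2:]
gen = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)
assert np.allclose(np.sort_complex(gen), np.sort_complex(z))
```

In the noise-free setting the recovered eigenvalues match the true generators up to floating-point error and a permutation.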
Meanwhile, it can be observed that
\begin{equation}
\left( {\frac{{{\bm{\alpha }}_l^{(1)H}}}{{{\bm{\alpha }}_l^{(1)H}{{\bm{\alpha }}_l^{(1)}}}} \otimes {{\bf{I}}_{I_2}}} \right)\left( {{{\bm{\alpha }}_l^{(1)}} \otimes {{\bm{\alpha }}_l^{(2)}}} \right) = {{\bm{\alpha }}_l^{(2)}}.
\end{equation}
Assume that the column vectors of ${\bf A}^{(1)}$ have unit norm; then ${\bm \alpha}_l^{(2)}$ can be written as
\begin{equation}
{\bm \alpha}_l^{(2)} = ({\bm \alpha}_l^{(1)H} \otimes {\bf I}_{I_2}){\bf U}{\bf e}_l,\quad l = 1,2,\cdots, L
\end{equation}
where ${\bf e}_l$ is the $l$-th column of $\bf E$. Given ${\bf A}^{(1)}$ and ${\bf A}^{(2)}$, the third factor matrix can be computed by
\begin{equation}
\begin{aligned}
{\bf A}^{(3)T} = {\left( {\left( {{{\bf{A}}^{(1)H}}{{\bf{A}}^{(1)}}} \right) * \left( {{{\bf{A}}^{(2)H}}{{\bf{A}}^{(2)}}} \right)} \right)^{ - 1}} \\
\times {\left( {{{\bf{A}}^{(1)}} \odot {{\bf{A}}^{(2)}}} \right)^H}{{\bf{Y}}_{(3)}}.
\end{aligned}
\end{equation}
Therefore, the tensor decomposition of $\cal Y$ is generically unique when ${\bf A}^{(1)}$ is a Vandermonde matrix or the KR product of two Vandermonde matrices with distinct generators, and ${\bf A}^{(3)}$ has full column rank.
\section{Proof of \eqref{2Dunfolding}}\label{A}
Given ${{\bf{y}}^{(q)}_{s}}$ in \eqref{z}, $s = 1,2,\cdots, I,I+1, \cdots, IJ$, we concatenate every $I$ consecutive vectors to form in total $J$ matrices of identical dimension $KN \times I$. These matrices are denoted by ${\bf \bar Y}^{(q)}_{j}$ and are given as
\begin{equation}
{\bf \bar Y}^{(q)}_{j} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\left( {{\bf{c}}_q^T \odot {\bf{h}}^T_{{j}} \odot {\bf{\Delta}}} \right)^T} + {{\bf{\bar N}}^{(q)}_{j}}\label{xy}
\end{equation}
where ${{\bf{\bar N}}_j^{(q)}} \triangleq \left[ {{{\bf{n}}_{(j-1)I+1}^{(q)}},{{\bf{n}}_{(j-1)I+2}^{(q)}},\cdots,{{\bf{n}}_{(jI)}^{(q)}}} \right]$. The noise-free version of \eqref{xy} can be rewritten as
\begin{equation}
{\bf \bar Y}^{(q)}_{j} = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf{\Gamma }}_q{\left( { {{\bf{h}}^T_{{j}}} \odot {\bf{\Delta }} } \right)^T}
\end{equation}
where ${\bf{\Gamma }}_q = {\rm diag}({\bf c}_q)$. Since $\left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf{\Gamma }}_q$ is fixed, the concatenation of the $J$ matrices depends only on the concatenation of ${\left( { {{\bf{h}}^T_{{j}}} \odot {\bf{\Delta }} } \right)^T}$. Define a matrix ${\bf \Theta}$
\begin{equation}
{\bf{\Theta }} \triangleq {\left[ {\begin{array}{*{20}{c}}
{{\bf{h}}_1^T \odot {\bf{\Delta }}}\\
{{\bf{h}}_2^T \odot {\bf{\Delta }}}\\
\vdots \\
{{\bf{h}}_J^T \odot {\bf{\Delta }}}
\end{array}} \right]_{S \times L}}
\end{equation}
such that ${\bf{\Theta }}$ performs the concatenation. From the definition of the KR product, it can be observed that ${\bf{\Theta }} = {\bf H} \odot {\bf \Delta}$. Therefore, the concatenation of ${\bf \bar Y}^{(q)}_{j}$ is given by
\begin{equation}
\begin{aligned}
{\bf \bar Y}^{(q)} &= \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\bf{\Gamma }}_q{\bf{\Theta }}^T\\
& = \left[ {\left( {{\bf{W}}_0^H{{\bf{A}}_0}} \right) \odot {\bf{B}}} \right]{\left( {{{\bf{c}}_q^T} \odot {\bf{H }} \odot {\bf{\Delta}}} \right)^T}.
\end{aligned}
\end{equation}
Taking the noise term into account, equation \eqref{2Dunfolding} follows.
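As a quick sanity check of the identity ${\bf \Theta} = {\bf H} \odot {\bf \Delta}$ used above, the following snippet (our illustration; the dimensions are arbitrary) verifies numerically that stacking the blocks ${\bf h}_j^T \odot {\bf \Delta}$ reproduces the Khatri-Rao product.

```python
import numpy as np

rng = np.random.default_rng(1)
J, m, L = 4, 5, 3
H = rng.standard_normal((J, L))       # plays the role of H (rows h_j^T)
Delta = rng.standard_normal((m, L))   # plays the role of Delta

def kr(A, B):
    # column-wise Khatri-Rao product: column l is kron(A[:, l], B[:, l])
    return np.einsum('jl,il->jil', A, B).reshape(-1, A.shape[1])

# stack the J blocks h_j^T (KR) Delta row-wise, as in the definition of Theta
Theta = np.vstack([kr(H[j:j + 1, :], Delta) for j in range(J)])
assert np.allclose(Theta, kr(H, Delta))
```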
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,Ref}
\end{document}
TITLE: Evaluating $\int_0^2 \frac{x^2}{x^3 + 1} \,\mathrm{d}x$
QUESTION [0 upvotes]: I don't understand how to get from the first to the second step here and get $1/3$ in front.
In the second step $g(x)$ substitutes $x^3 + 1$.
\begin{align*}
\int_0^2 \frac{x^2}{x^3 + 1} \,\mathrm{d}x
&= \frac{1}{3} \int_{0}^{2} \frac{1}{g(x)} g'(x) \,\mathrm{d}x
= \frac{1}{3} \int_{1}^{9} \frac{1}{u} \,\mathrm{d}u \\
&= \left. \frac{1}{3} \ln(u) \,\right|_{1}^{9}
= \frac{1}{3} ( \ln 9 - \ln 1 )
= \frac{\ln 9}{3}
\end{align*}
REPLY [0 votes]: Looking at the denominator, you define $g(x)=x^3+1$. This means that $g'(x)=3x^2$.
Your goal is to manipulate the integrand so that it is in the form $g'(x)/g(x)$, that is, $3x^2/(x^3+1)$. As it stands, the factor $3$ you need is missing. You can't just throw it in, because that changes the integrand. But you can throw it in and pull it out at the same time, since that amounts to multiplying by $1$, which doesn't change anything:
$$\frac{x^2}{x^3+1}=\frac{\overbrace{(\frac13\cdot 3)}^1 x^2}{x^3+1}$$
$$=\frac{\frac13\cdot (3 x^2)}{x^3+1}$$
$$=\tfrac13\cdot \frac{3x^2}{x^3+1}$$
So you proceed from here.
Generally, if you need a missing constant factor of $c$ in the integrand, you can put it in if you also compensate by putting its reciprocal $\frac1c$ in as well. And remember, constant factors can be pulled to the outside of the integral. The rule is
$$\int f(x)\; dx = \frac1c\int cf(x)\; dx$$
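(Not part of the original answer.) If you want to double-check the value numerically, a quick composite Simpson's rule confirms that the integral equals $\ln 9/3$:

```python
from math import log

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

val = simpson(lambda x: x**2 / (x**3 + 1), 0, 2)
assert abs(val - log(9) / 3) < 1e-8   # ln(9)/3 ~ 0.7324
```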
import data.real.nnreal
import topology.algebra.infinite_sum
import topology.instances.ennreal
import topology.algebra.monoid
open_locale nnreal big_operators
open finset
/-!
# A technical lemma on the way to `lem98`
The purpose of this file is to prove the following lemma:
```
lemma exists_partition (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) (hf' : summable f) :
∃ (mask : fin N → set ℕ),
(∀ n, ∃! i, n ∈ mask i) ∧ (∀ i, ∑' n, set.indicator (mask i) f n ≤ (∑' n, f n) / N + 1) :=
```
In disguise, this is `lem98` (`combinatorial_lemma/default.lean`) specialized to `Λ = ℤ`.
The proof of the general case makes a reduction to this special case.
## Informal explanation of the statement
The lemma `exists_partition` informally says the following:
Suppose we have a sequence of real numbers `f 0`, `f 1`, …, all between `0` and `1`,
and suppose that `c = ∑ (f i)` exists.
Then, for every positive natural number `N`, we can split `f` into `N` subsequences `g 1`, …, `g N`,
such that `∑ (g j i) ≤ c/N + 1`.
The informal proof is easy: consider `N` buckets, that are initially empty.
Now view the numbers `f i` as an incoming stream of numbers,
and place each of these in the buckets with the smallest total sum.
The formal proof is a bit trickier: we need to make sure that every number ends up in a bucket,
we need to show that the final subsequences have a converging sum, etc…
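For intuition, the greedy bucket procedure from the informal proof can be simulated directly. The following is a Python sketch (illustration only, not part of the formalisation; the input sequence is random test data) showing that every term lands in exactly one bucket and every bucket sum stays below `c/N + 1`:

```python
import random

def greedy_partition(f, N):
    """Put each incoming term into the bucket with the smallest running sum."""
    sums = [0.0] * N
    masks = [[] for _ in range(N)]
    for n, x in enumerate(f):
        i = min(range(N), key=sums.__getitem__)
        masks[i].append(n)
        sums[i] += x
    return masks, sums

random.seed(0)
f = [random.random() for _ in range(1000)]   # terms in [0, 1]
N = 7
masks, sums = greedy_partition(f, N)
# each index lies in exactly one bucket, and each bucket sum is <= c/N + 1
assert sorted(i for m in masks for i in m) == list(range(len(f)))
assert all(s <= sum(f) / N + 1 for s in sums)
```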
We model the subsequences by using indicator functions to mask parts of `f`
using `N` subsets of `ℕ` (`mask` in the statement).
In `recursion_data` below, we setup the `N` buckets,
and define the recursion step in `recursion_data_succ`.
The rest of the file consists of assembling the pieces.
-/
namespace combinatorial_lemma
/-- A data structure for recording partial sums of subsequences of a sequence of real numbers,
such that all the partial sums are roughly the same size.
The field `m` records to which partial sum the next entry in the sequence will be added. -/
structure recursion_data (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) (k : ℕ) :=
(m : fin N → Prop)
(hm : ∃! i, m i)
(partial_sums : fin N → ℝ≥0)
(h₁ : ∑ i, partial_sums i = ∑ n in range (k + 1), f n)
(h₂ : ∀ i, partial_sums i ≤ (∑ n in range (k + 1), f n) / N + 1)
[dec_inst : ∀ i, decidable (m i)]
attribute [instance] recursion_data.dec_inst
/-- The starting point for recursively constructing subsequences of a sequence of real numbers
such that the sums of all the subsequences are roughly the same size:
we start by placing the first element of the sequence into the subsequence `0`. -/
def recursion_data_zero (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) :
recursion_data N hN f hf 0 :=
{ m := λ j, j = ⟨0, hN⟩,
hm := ⟨_, rfl, λ _, id⟩,
partial_sums := λ j, if j = ⟨0, hN⟩ then f 0 else 0,
h₁ := by simp only [sum_ite_eq', if_true, mem_univ, sum_singleton, range_one],
h₂ :=
begin
intros i,
split_ifs,
{ simp only [sum_singleton, range_one],
refine (hf 0).trans _,
exact self_le_add_left 1 (f 0 / ↑N) },
{ exact zero_le' }
end }
instance (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) :
inhabited (recursion_data N hN f hf 0) := ⟨recursion_data_zero N hN f hf⟩
/-- Given partial sums of subsequences up to the `k`-th element in a sequence of real numbers,
add the `k+1`st element to the smallest partial sum so far. -/
noncomputable def recursion_data_succ (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) (k : ℕ)
(dat : recursion_data N hN f hf k) :
recursion_data N hN f hf (k + 1) :=
let I := (finset.univ : finset (fin N)).exists_min_image
dat.partial_sums ⟨⟨0, hN⟩, finset.mem_univ _⟩ in
{ m := λ j, j = I.some,
hm := ⟨I.some, rfl, λ _, id⟩,
partial_sums := λ i, dat.partial_sums i + (if i = I.some then f (k + 1) else 0),
h₁ :=
begin
rw sum_range_succ _ (k + 1),
simp [finset.sum_add_distrib, dat.h₁, add_comm],
end,
h₂ :=
begin
intros i,
split_ifs,
{ rw h,
have : dat.partial_sums I.some * N ≤ (∑ n in range (k + 1 + 1), f n),
{ calc dat.partial_sums I.some * N
= ∑ i : fin N, dat.partial_sums I.some : _
... ≤ ∑ i, dat.partial_sums i : _ -- follows from I
... = ∑ n in range (k + 1), f n : dat.h₁
... ≤ ∑ n in range (k + 1 + 1), f n : _,
{ simp only [finset.sum_const, finset.card_fin, nsmul_eq_mul, mul_comm] },
{ obtain ⟨-, HI⟩ := I.some_spec,
apply finset.sum_le_sum,
intros j hj, exact HI j hj },
{ rw sum_range_succ _ (k + 1),
simp } },
have : dat.partial_sums I.some ≤ (∑ n in range (k + 1 + 1), f n) / ↑N,
{ rwa nnreal.le_div_iff_mul_le, exact_mod_cast hN.ne' },
exact add_le_add this (hf (k + 1)) },
{ calc dat.partial_sums i + 0
≤ (∑ n in range (k + 1), f n) / ↑N + 1 : by simpa using dat.h₂ i
... ≤ (∑ n in range (k + 1 + 1), f n) / ↑N + 1 : add_le_add _ le_rfl,
simp only [div_eq_mul_inv, fin.sum_univ_eq_sum_range],
refine mul_le_mul' _ le_rfl,
simp only [finset.sum_range_succ],
exact self_le_add_right _ _ }
end }
/-- Recursively construct subsequences of a given sequence of real numbers,
in such a way that the sums of the subsequences are all roughly of the same size. -/
noncomputable def partition (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) :
Π i : ℕ, (recursion_data N hN f hf i)
| 0 := recursion_data_zero N hN f hf
| (k + 1) := recursion_data_succ N hN f hf k (partition k)
lemma partition_sums_aux (k : ℕ) (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1)
(i : fin N) :
(partition N hN f hf (k + 1)).partial_sums i =
(partition N hN f hf k).partial_sums i +
if (partition N hN f hf (k + 1)).m i then f (k + 1) else 0 :=
by simp [partition, recursion_data_succ]
lemma partition_sums (k : ℕ) (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1)
(i : fin N) :
(partition N hN f hf k).partial_sums i =
∑ n in range (k + 1), set.indicator {k | (partition N hN f hf k).m i} f n :=
begin
induction k with k IH,
{ dsimp [partition], simp, dsimp [partition, recursion_data_zero], congr },
rw [partition_sums_aux, IH, sum_range_succ _ k.succ, set.indicator, add_right_inj],
congr' 1,
end
lemma exists_partition (N : ℕ) (hN : 0 < N) (f : ℕ → ℝ≥0) (hf : ∀ n, f n ≤ 1) (hf' : summable f) :
∃ (mask : fin N → set ℕ),
(∀ n, ∃! i, n ∈ mask i) ∧ (∀ i, ∑' n, set.indicator (mask i) f n ≤ (∑' n, f n) / N + 1) :=
begin
let mask : fin N → ℕ → Prop := λ i, {n | (partition N hN f hf n).m i},
have h_sum : ∀ k i, ∑ n in range k, set.indicator (mask i) f n ≤ (∑ n in range k, f n) / N + 1,
{ rintros ⟨k⟩ i,
{ simp [mask] },
rw ← partition_sums k N hN f hf i,
exact (partition N hN f hf k).h₂ i, },
refine ⟨mask, _, _⟩,
{ intros n, exact (partition N hN f hf n).hm },
{ intros i,
set S₁ : ℝ≥0 := ∑' (n : ℕ), f n,
have hf'' : has_sum f S₁ := hf'.has_sum,
have hf''' : has_sum _ (S₁ / N) := hf''.mul_right (N:ℝ≥0)⁻¹,
have : set.indicator (mask i) f ≤ f,
{ intros n,
dsimp [set.indicator],
split_ifs, exacts [le_rfl, zero_le'] },
obtain ⟨S₂, -, h_mask⟩ := nnreal.exists_le_has_sum_of_le this hf'',
rw h_mask.tsum_eq,
rw nnreal.has_sum_iff_tendsto_nat at hf''' h_mask,
have := filter.tendsto.add_const 1 hf''',
apply le_of_tendsto_of_tendsto' h_mask this,
intros n,
simp only [div_eq_mul_inv, finset.sum_mul] at h_sum ⊢,
exact h_sum n i }
end
end combinatorial_lemma
#lint-
\begin{document}
\title[Multicomponent compressible fluids]{Mass transport in multicomponent compressible fluids: Local and global well-posedness in classes of strong solutions for general class-one models}
\author[D. Bothe]{Dieter Bothe}
\address{Mathematische Modellierung und Analysis, Technische Universit\"at Darmstadt, Alarich-Weiss-Str. 10, 64287 Darmstadt, Germany}
\email{[email protected]}
\author[P.-E. Druet]{Pierre-Etienne Druet}
\address{Weierstrass Institute, Mohrenstr. 39, 10117 Berlin, Germany}
\email{[email protected]}
\date{\today}
\subjclass[2010]{35M33, 35Q30, 76N10, 35D35, 35B65, 35B35, 35K57, 35Q35, 35Q79, 76R50, 80A17, 80A32, 92E20}
\keywords{Multicomponent flow, fluid mixture, compressible fluid, diffusion, reactive fluid, well-posedness analysis, strong solutions}
\thanks{This research is supported by the grant DR1117/1-1 of the German Science Foundation}
\maketitle
\begin{abstract}
We consider a system of partial differential equations describing mass transport in a multicomponent isothermal compressible fluid. The diffusion fluxes obey the Fick-Onsager or Maxwell-Stefan closure approach. Mechanical forces result in a single convective mixture velocity, the barycentric one, which obeys the Navier-Stokes equations. The thermodynamic pressure is defined by the Gibbs-Duhem equation. Chemical potentials and pressure are derived from a thermodynamic potential, the Helmholtz free energy, whose bulk density is allowed to be a general convex function of the mass densities of the constituents.
The resulting PDEs are of mixed parabolic--hyperbolic type. We prove two theoretical results concerning the well-posedness of the model in classes of strong solutions: 1. The solution always exists and is unique for short times, and 2. If the initial data are sufficiently close to an equilibrium solution, the well-posedness is valid on arbitrarily large, but finite, time intervals. Both results rely on a contraction principle valid for systems of mixed type that behave like the compressible Navier-Stokes equations. The linearised parabolic part of the operator possesses the self-map property with respect to some closed ball \emph{in the state space}, while being contractive \emph{in a lower order norm} only. In this paper, we implement these ideas by means of precise \emph{a priori} estimates in spaces of exact regularity.
\end{abstract}
\section{Mass transport for a multicomponent compressible fluid}
This paper is devoted to the mathematical analysis of general \emph{class-one models} of mass transport in isothermal multicomponent fluids. We are interested in the theoretical issues of unique solvability and continuous dependence (in short: well-posedness) in classes of strong solutions for the underlying PDEs. To start with, we shall present the model very briefly. An extensive derivation from thermodynamic first principles is to be found in \cite{bothedreyer}, or in \cite{dreyerguhlkemueller}, \cite{dreyerguhlkemueller19} for the extension to charged constituents. There are naturally alternative modelling approaches: the reader who wishes to explore the model might for instance consult the references in these papers, or the book \cite{giovan}.
\textbf{Model for the bulk.} We consider a molecular mixture of $N\geq 2$ chemical species $\ce{A_1,\ldots,A_N}$ assumed to constitute a fluid phase.
The convective and diffusive mass transport of these species and their mechanical behaviour are described by the following balance equations:
\begin{alignat}{2}
\label{mass}\partial_t \rho_i + \divv( \rho_i \, v + J^i) & = r_i & & \text{ for } i = 1,\ldots,N\, ,\\
\label{momentum}\partial_t (\varrho \, v) + \divv( \varrho \, v\otimes v - \mathbb{S}(\nabla v)) + \nabla p & = \sum_{i=1}^N \rho_i \, b^i(x, \, t) & & \, .
\end{alignat}
The equations \eqref{mass} are the partial mass balances for the partial mass densities $\rho_1,\ldots,\rho_N$ of the species. We shall use the abbreviation $\varrho := \sum_{i=1}^N \rho_i$ for the total mass density. The barycentric velocity of the fluid is called $v$ and the thermodynamic pressure $p$.
In the Navier-Stokes equations \eqref{momentum}, the viscous stress tensor is denoted $\mathbb{S}(\nabla v)$. The vector fields $b^1,\ldots,b^N$ are the external body forces. The \emph{diffusion fluxes} $J^1,\ldots,J^N$, which are defined to be the non-convective part of the mass fluxes, must by definition satisfy the necessary side-condition $\sum_{i=1}^N J^i = 0$. Following the thermodynamically consistent Fick--Onsager closure approach described in \cite{bothedreyer}, \cite{dreyerguhlkemueller19} (older work in \cite{MR59, dGM63}), the diffusion fluxes $J^1,\ldots,J^N$ obey, in the isothermal case,
\begin{align}\label{DIFFUSFLUX}
J^i = - \sum_{j=1}^N M_{i,j}(\rho_1, \ldots, \rho_N) \, (\nabla \mu_j - b^j(x, \, t)) \, \text{ for } i =1,\ldots,N \, .
\end{align}
The \emph{Onsager matrix} $M(\rho_1, \ldots,\rho_N)$ is a symmetric, positive semi-definite $N\times N$ matrix for every $(\rho_1,\ldots,\rho_N) \in \mathbb{R}^N_+$. In all known linear closure approaches this matrix satisfies
\begin{align}\label{CONSTRAINT}
\sum_{i=1}^N M_{i,j}(\rho_1,\ldots,\rho_N) = 0 \quad \text{ for all } (\rho_1,\ldots,\rho_N) \in \mathbb{R}^N_+ \, .
\end{align}
One possibility to compute the special form of $M$ is for instance to invert the Maxwell--Stefan balance equations. For the mathematical treatment of this algebraic system, the reader can consult \cite{giovan}, \cite{bothe11}, \cite{justel13} or \cite{herbergpruess}. Alternatively, $M$ can be constructed directly in the form $P^T \, M^{0} \, P$, where $M^{0}$ is a given matrix of full rank, and $P$ is a projector guaranteeing that \eqref{CONSTRAINT} is valid. The paper \cite{bothedruetMS} establishes equivalence relations between the Fick--Onsager and the Maxwell--Stefan constitutive approaches, so that we do not need to further specify the structure of the tensor $M$ here.
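Note that the constraint \eqref{CONSTRAINT} already guarantees the side-condition on the diffusion fluxes: summing \eqref{DIFFUSFLUX} over $i$ and exchanging the order of summation yields
\begin{align*}
\sum_{i=1}^N J^i = - \sum_{j=1}^N \Big( \sum_{i=1}^N M_{i,j}(\rho_1,\ldots,\rho_N) \Big) \, (\nabla \mu_j - b^j(x, \, t)) = 0 \, .
\end{align*}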
The quantities $\mu_1,\ldots,\mu_N$ are the chemical potentials. The material theory which provides the definition of $\mu$ is based on the assumption that the Helmholtz free energy of the system possesses only a bulk contribution with density $\varrho\psi$. Moreover, this function possesses the special form
\begin{align*}
\varrho\psi = h(\rho_1, \ldots, \rho_N) \, ,
\end{align*}
where $h: \, \mathcal{D} \subseteq \mathbb{R}^N_+ \rightarrow \mathbb{R}$ is convex and sufficiently smooth in the range of mass densities relevant for the model. For the sake of simplicity, we shall in fact assume that $h$ possesses a smooth convex extension to the entire range of admissible mass densities $\mathbb{R}^N_+ = \{\rho \in \mathbb{R}^N \, : \, \rho_i > 0 \text{ for } i =1,\ldots,N\}$. The \emph{chemical potentials} $\mu_1,\ldots,\mu_N$ of the species are related to the mass densities $\rho_1, \ldots,\rho_N$ via
\begin{align}\label{CHEMPOT}
\mu_i = \partial_{\rho_i}h(\rho_1, \ldots, \rho_N) \, .
\end{align}
In \eqref{momentum}, the thermodynamic pressure has to obey the isothermal Gibbs-Duhem equation
\begin{align}\label{GIBBSDUHEM}
\sum_{i=1}^N \rho_i \, d\mu_i = dp \,
\end{align}
where $d$ is the total differential. This yields, up to a reference constant, a relationship between $p$ and the variables $\rho_1,\ldots,\rho_N$ which is often called the Euler equation:
\begin{align}\label{GIBBSDUHEMEULER}
p = -h(\rho_1,\ldots,\rho_N) + \sum_{i=1}^N \rho_i \, \mu_i = -h(\rho_1,\ldots,\rho_N) + \sum_{i=1}^N \rho_i\, \partial_{\rho_i}h(\rho_1, \ldots, \rho_N) \, .
\end{align}
For the mathematical theory of this paper, we do not need to assume a special form of the free energy density, but rather formulate general assumptions: the free energy function is required to be a \emph{Legendre function} on $\mathbb{R}^N_+$ with surjective gradient onto $\mathbb{R}^N$.
For illustration, let us remark that the choices
\begin{itemize}
\item $h = k_B \, \theta \, \sum_{i=1}^N n_i \, \ln \frac{n_i}{n^{\text{ref}}}$;
\item $h = K\, F\left(\sum_{i=1}^N n_i \, \bar{v}_i^{\text{ref}}\right) + k_B \, \theta \, \sum_{i=1}^N n_i \, \ln \frac{n_i}{n}$;
\item $h = \sum_{i=1}^N K_i \, n_i\, \bar{v}_i^{\text{ref}} \, ((n_i\, \bar{v}_i^{\text{ref}})^{\alpha_i-1} + \ln (n_i\, \bar{v}_i^{\text{ref}})) + k_B \, \theta \, \sum_{i=1}^N n_i \, \ln \frac{n_i}{n}$;
\end{itemize}
are covered by the results of this paper. In these examples, $\theta > 0$ is the constant absolute temperature, $n_i := \rho_i/m_i$ is the number density ($m_i > 0$ the molecular mass of species $\ce{A_i}$), and $n = \sum_{i=1}^N n_i$ is the total number density. The first example models the free energy for a mixture of ideal gases with a reference value $n^{\text{ref}} > 0$. In the second example, the constants $\bar{v}_i^{\text{ref}}> 0$ are reference volumes introduced in \cite{dreyerguhlkelandstorfer} to explain solvation effects in electrolytes, $K >0$ is the compression modulus of the fluid, and $F$ is a general non-linear convex function related to volume extension. The third example, with constants $K_i > 0$ and $\alpha_i \geq 1$, shows that more complex state equations can be included in the setting as well. For the convenience of the reader, we briefly show in the Appendix, Section \ref{Legendre}, that the examples fit into the abstract framework of our well-posedness theorems. We also remark that stating assumptions directly on the thermodynamic potential $h$ is legitimate, because this potential can always be recovered from the knowledge of the chemical potentials or of the state equation of the physical system, as shown in \cite{bothedreyer} or in the upcoming paper \cite{bothedruetfe}.
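As a quick consistency check, for the first example the Euler equation \eqref{GIBBSDUHEMEULER} reproduces the ideal gas law: with $\mu_i = \partial_{\rho_i} h = \frac{k_B \, \theta}{m_i}\,\big(\ln\frac{n_i}{n^{\text{ref}}} + 1\big)$, one obtains

```latex
\begin{align*}
p = -h + \sum_{i=1}^N \rho_i \, \mu_i
  = k_B \, \theta \sum_{i=1}^N n_i \Big( \ln\frac{n_i}{n^{\text{ref}}} + 1 \Big)
    - k_B \, \theta \sum_{i=1}^N n_i \ln \frac{n_i}{n^{\text{ref}}}
  = k_B \, \theta \, n \, .
\end{align*}
```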
Reaction densities $r_i = r_i(\rho_1,\ldots,\rho_N)$ or $r_i = r_i(\mu_1,\ldots,\mu_N)$ for $i = 1,\ldots,N$ will be considered in \eqref{mass} only for the sake of generality. We shall not enter into the details of their modelling. We just note that these functions are likewise subject to the constraint $\sum_{i=1}^N r_i(\rho_1,\ldots,\rho_N) = 0$ for all $(\rho_1,\ldots,\rho_N) \in \mathbb{R}^N_+$, which expresses the conservation of mass by the reactions. As to the stress tensor $\mathbb{S}$, we shall restrict ourselves for simplicity to the standard Newtonian form with constant coefficients. The paper, however, provides methods which are sufficient to extend the results to the case of density- and composition-dependent viscosity coefficients.
\textbf{Boundary conditions.} We investigate the system \eqref{mass}, \eqref{momentum} in a cylindrical domain $Q_T := \Omega \times ]0,T[$ with a bounded domain $\Omega \subset \mathbb{R}^3$ and $T>0$ a finite time. It is possible to treat the case $\Omega \subset \mathbb{R}^d$ for general $d\geq 2$ with exactly the same methods.
We are mainly interested in results for the bulk operators. Thus, we allow ourselves some simplifications concerning the initial and boundary operators. Many interesting phenomena, like mass transfer at active boundaries or chemical reactions with surfactants, shall not be considered here but in further publications. Boundary conditions are also often the source of additional problems for the mathematical theory, such as mixed boundary conditions, non-smooth boundaries, or singular initial data. All this can, however, only be dealt with in the context of weak solutions, and is not our object here.
We consider the initial conditions
\begin{alignat}{2}\label{initial0rho}
\rho_i(x,\, 0) & = \rho^0_i(x) & & \text{ for } x \in \Omega, \, i = 1,\ldots, N \, ,\\
\label{initial0v} v_j(x, \, 0) & = v^0_j(x) & & \text{ for } x \in \Omega, \, j = 1,2,3\, .
\end{alignat}
For simplicity, we consider the linear homogeneous boundary conditions
\begin{alignat}{2}
\label{lateral0v} v & = 0 & &\text{ on } S_T := \partial \Omega \times ]0,T[ \, ,\\
\label{lateral0q} \nu \cdot J^i & = 0 & &\text{ on } S_T \text{ for } i = 1,\ldots,N \, .
\end{alignat}
\section{Mathematical analysis: state of the art and our results}\label{mathint}
The local or global existence and uniqueness of strong solutions to the class-one model exposed in the introduction has, at this level of generality, not yet been studied. More generally, there are relatively few rigorous analytical investigations of mass transport equations for a multicomponent system, with or without chemical reactions, coupled to equations of Navier-Stokes type. In the theoretical study of this problem, two different branches of PDE analysis meet: diffusion--reaction systems and mathematical fluid dynamics.
The first fundamental observation in studying the system is that the differential operator generated by the mass transport equations is not parabolic. This is due to the condition \eqref{CONSTRAINT}, which implies that the second--order spatial differential operator possesses one zero eigenvalue. The total mass density satisfies the continuity equation $\partial_t \varrho + \divv(\varrho \, v) = 0$, so that one coordinate of the vector of unknowns behaves inherently hyperbolically.
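In more detail (assuming, as in the reformulated system below, that \eqref{mass} has the form $\partial_t \rho_i + \divv(\rho_i \, v + J^i) = r_i$): summing over $i$ and using $\sum_{i=1}^N J^i = 0$, a consequence of $M \, 1^N = 0$ and the symmetry of $M$, together with $\sum_{i=1}^N r_i = 0$, one finds

```latex
\begin{align*}
\partial_t \varrho + \divv(\varrho \, v)
= \sum_{i=1}^N \big( \partial_t \rho_i + \divv( \rho_i \, v + J^i) \big)
= \sum_{i=1}^N r_i = 0 \, .
\end{align*}
```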
One important question is how to deal with this hyperbolic component. Among the papers representing the most important advances in the understanding of the field, we can mention \cite{herbergpruess} and \cite{chenjuengel}. The first paper is concerned with local-in-time well-posedness analysis for strong solutions, while the second deals with globally defined weak solutions for the Maxwell-Stefan closure of the diffusion fluxes. Both papers, however, rely on the same fundamental idea of eliminating the hyperbolic component by assuming $\varrho = \text{const.}$ (incompressibility). Under this condition, the Navier-Stokes equations reduce to their incompressible variant and decouple from the mass transport system. The latter system can be solved independently and re-expressed as a parabolic problem for the mass fractions $\rho_1/\varrho, \ldots,\rho_N/\varrho$ (in \cite{herbergpruess}) or for differences of chemical potentials (in \cite{chenjuengel}). In both cases there remain only $N-1$ independent variables. Let us briefly remark that the Navier-Stokes equations do not occur explicitly in \cite{herbergpruess}, but are treated (in addition to other difficulties) in \cite{bothepruess}. Note that in \cite{bothesoga}, a class of multicomponent mixtures has been introduced for which the use of the incompressible Navier-Stokes equations is more realistic: incompressibility is assumed for the solvent only, and diffusion is considered against the solvent velocity.
In the context of compressible fluids, the global weak solution analysis of non-isothermal class-one models was initiated in \cite{feipetri08}, where a simplified diffusion law (diagonal, full-rank closure) was considered so that the problem of degenerate parabolicity is avoided. In \cite{mupoza15} and \cite{za15} the fluxes are calculated from a constitutive equation similar to \eqref{DIFFUSFLUX}, though for a special choice of the mobility matrix and of the thermodynamic potential. The global existence of weak solutions is tackled by means of diverse stabilisation techniques and the so-called Bresch-Desjardins technique, which exploits a special dependence of the viscosity coefficients on the density to obtain estimates for the density gradient.
The first paper dealing with the full class-one model exposed in the introduction, for more general thermodynamic potentials $h = h(\rho)$ and closure relations for diffusion fluxes and reaction terms, is the investigation \cite{dredrugagu16}, followed by \cite{dredrugagu17a,dredrugagu17b,dredrugagu17c} in the context of charged carriers (electrolytes). There, the ideas of \cite{chenjuengel} were generalised in order to rewrite the PDEs as a coupled system with the following structure:
\begin{enumerate}[(a)]
\item \label{paran} A doubly non-linear parabolic system for $N-1$ variables $q_1, \ldots,q_{N-1}$ called relative chemical potentials (for instance $q_i := \mu_i-\mu_N$ for $i = 1,\ldots,N-1$);
\item \label{ns}The compressible Navier-Stokes equations with pressure $p = P(\varrho, \, q_1,\ldots, q_{N-1})$ to determine the variables $\varrho$ and $v$.
\end{enumerate}
The concrete form of this system is given by the equations
\begin{alignat*}{2}
\partial_t R(\varrho, \, q) + \divv( R(\varrho, \, q) \, v - \widetilde{M}(\varrho, \, q) \, (\nabla q - \tilde{b}) ) & = \tilde{r}(\varrho, \, q) \, ,& & \\
\partial_t \varrho + \divv(\varrho \, v) & = 0 \, ,& & \\
\partial_t (\varrho \, v) + \divv( \varrho \, v\otimes v - \mathbb{S}(\nabla v)) + \nabla P(\varrho, \, q) & = R(\varrho, \, q) \cdot \tilde{b}(x, \, t) + \varrho \, \bar{b}(x, \, t) & &
\end{alignat*}
in which $\widetilde{M} \in \mathbb{R}^{(N-1)\times (N-1)}$ is a \emph{positive} operator, the Jacobian $R_q \in \mathbb{R}^{(N-1)\times (N-1)}$ is likewise positive definite, and $\tilde{b}, \, \bar{b}, \, \tilde{r}$ are suitable transformations of the vector of bulk forces and of the reaction term. This formulation has many advantages, the most obvious one being that it allows one to handle the total mass density with Navier-Stokes techniques and eliminates the tedious positivity constraints on the partial mass densities.
Applying these ideas, we were able to prove the global existence of certain weak solutions under the restriction that the non-zero eigenvalues of $M = M(\rho)$ remain strictly positive independently of the possible vanishing of species.
In this paper, we show that the reformulation based on \eqref{paran}, \eqref{ns} is also suited to study the local and global well-posedness for strong solutions, without restriction on the particular form of the free energy (within the assumption $h = h(\rho)$ with $h$ \emph{of Legendre type}, a notion to be defined below), and for general $M = M(\rho)$ symmetric and positive semi-definite of rank $N-1$. From the point of view of its structure, the reformulated system of equations consists of the compressible Navier-Stokes equations, coupled to a doubly nonlinear parabolic system of dimension $N-1$ for the unknown $q$. For $N =2$, the equation for $q$ is scalar, and we would face a variant of the so-called Navier-Stokes-Fourier system with $q$ playing the role of the temperature, and a density-dependent diffusion coefficient $\widetilde{M}$.
The general method used to study these systems in classes of strong or classical solutions is the contraction principle, valid for short times or small perturbations of equilibrium solutions (a property sometimes improperly called 'small data'). We have to pay attention to the fact, though, that the parabolic contraction principle does not apply here in its pure form. There have been two types of approaches to mixed systems like Navier-Stokes and Navier-Stokes-Fourier. The first method consists in passing to Lagrangian coordinates, in terms of which there is an explicit inversion formula for the continuity equation. Then the density is eliminated, and it is possible to study the parabolic part of the system with a nonlocal term. This is the approach presented, for instance, in \cite{tani}, \cite{solocompress3}, with short-time well-posedness results in the scale of H\"older spaces (see also \cite{solocompress2} for corresponding results, stated without proofs, in the scale of Hilbert spaces). The second method sticks to the Eulerian coordinates and exploits precise estimates to control the growth of the solution. Early results for this approach are to be found in \cite{solocompress} for the Navier-Stokes operator in the Sobolev (non-Hilbertian) scale of spaces, and in \cite{matsunishi,valli82,valli83,vallizaja} for the Sobolev-Hilbert scale. Further short comments on this type of literature are given after the statement of the main Theorems.
In our case, for $N > 2$ the parabolic system for $q$ is non-diagonal, but its linearised principal part in smooth points is still parabolic in the sense of Petrovskii (normally elliptic in Amann's notation). This is clearly a nontrivial extension of the traditional problems of fluid mechanics. We shall study the problem in the class proposed in the paper \cite{solocompress} for Navier-Stokes: $W^{2,1}_p$ with $p$ larger than the space dimension for the components of the velocity, and $W^{1,1}_{p,\infty}$ for the densities. For the new variable $q$ we also choose the parabolic setting of $W^{2,1}_p$. Within these classes we are able to prove the local existence and the semi-flow property for strong solutions. We shall also prove global existence under the condition that the initial data are sufficiently close to an equilibrium (stationary) solution. Since this result rests on stability estimates in the state space, we need, however, to assume higher regularity of the initial data in order to obtain some stability from the continuity equation. Thus, these solutions exist and are unique on arbitrarily large time intervals, but they might not enjoy the extension property.
A further feature worth mentioning is that in our treatment, the question of positivity of the mass densities is reduced to obtaining an $L^{\infty}$ estimate for the relative chemical potentials $q_1, \ldots ,q_{N-1}$ and a positivity estimate for the total mass density $\varrho$. This is a consequence of the fact that we recover $\rho_1, \ldots, \rho_N$ in the form of a continuous map $\mathscr{R}(\varrho, \, q)$ with range in $\mathbb{R}^N_+$. The positivity of a solution $\varrho$ to the continuity equation depends only on the smoothness of the velocity field $v$, while the $L^{\infty}$ bound for $q$ is a natural consequence of the choice of the state space. In this way, the question of positivity is entirely reduced to the smoothness issue, and strong solutions remain by definition positive as long as they are bounded in the state space.
At last we would like to mention that, while finishing this investigation, we became aware of the recent work \cite{piashiba18}. There the authors study the short-time well-posedness for a model similar to the one considered in \cite{mupoza15,za15}, restricted to certain particular choices of the thermodynamic potentials and of the kinetic matrix. The paper rests on the same change of variables as in \cite{dredrugagu16}, and it uses a reformulation similar to \eqref{paran}, \eqref{ns}. The problem is studied in the $L^pL^q$ parabolic setting by means of the Lagrangian coordinate transformation. This approach provides interesting complements to the methods proposed in the present paper, and vice versa.
\subsection{Main results}
We assume that $\Omega \subset \mathbb{R}^3$ is a bounded domain and $T >0$. We denote $Q= Q_T = \Omega \times ]0,T[$.
In order to formulate our results, we first recall a few notations and definitions. First, for $\ell = 1, \, 2, \ldots$ and $1 \leq p \leq +\infty$ we introduce the anisotropic/parabolic Sobolev spaces
\begin{align*}
W^{2\ell, \ell}_p(Q) := & \{ u \in L^p(Q) \, : \, D_t^{\beta}D^{\alpha}_x u \in L^p(Q) \, \forall\,\, 1 \leq 2 \, \beta + |\alpha| \leq 2 \, \ell \} \, ,\\
\|u\|_{W^{2\ell,\ell}_p(Q)} := & \sum_{0\leq 2 \, \beta + |\alpha| \leq 2 \, \ell} \|D_t^{\beta}D^{\alpha}_x u \|_{L^p(Q)}
\end{align*}
and, with a further index $1 \leq r < \infty$, the spaces
\begin{align*}
W^{\ell}_{p,r}(Q) = W^{\ell,\ell}_{p,r}(Q) := & \{u \in L^{p,r}(Q) \, : \, D_t^{\beta} D^{\alpha}_x u \in L^{p,r}(Q) \,\, \forall \, 0\leq \beta + |\alpha| \leq \ell \} \, ,\\
\|u\|_{W^{\ell,\ell}_{p,r}(Q)} := & \sum_{0\leq \beta + |\alpha| \leq \ell} \|D_t^{\beta}D^{\alpha}_x u \|_{L^{p,r}(Q)} \, .
\end{align*}
Let us point out that in these notations the space integrability index always comes first. For $r = + \infty$, $W^{\ell,\ell}_{p,\infty}(Q)$ denotes the closure of $C^{\ell}(\overline{Q})$ with respect to the norm above, and thus
\begin{align*}
W^{\ell,\ell}_{p,\infty}(Q) := & \{u \in L^{p,\infty}(Q) \, : \, D^{\alpha}_x \, D^{\beta}_t u \in C([0,T]; \, L^p(\Omega)) \,\, \forall \, 0 \leq \beta + |\alpha| \leq \ell \} \, .
\end{align*}
We moreover need the concept of \emph{essential smoothness} for a proper, convex function $f: \, \mathbb{R}^N \rightarrow \mathbb{R}$ (see \cite{rockafellar}, page 251).
For an essentially smooth, strictly convex function with open domain, the operation of conjugation is identical with applying the classical Legendre transform. We will therefore call $h: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}$ a \emph{Legendre function} if it belongs to $C^1(\mathbb{R}^N_+)$, is strictly convex, and if $|\nabla_{\rho} h(\rho)| \rightarrow +\infty$ for $\rho \rightarrow \partial \mathbb{R}^N_+$. If the function $h$ is moreover \emph{co-finite} (\cite{rockafellar}, page 116), the gradient mapping $\nabla h$ is invertible between the domain of $h$ and the entire space $\mathbb{R}^N$. Typical free energy densities $h$ are co-finite functions of Legendre type as shown in Appendix, Section \ref{Legendre}.
Due to \eqref{CONSTRAINT}, the diffusion system \eqref{mass} is not parabolic. The matrix $M(\rho) \, D^2h(\rho)$ possesses only $N-1$ positive eigenvalues that moreover might degenerate for vanishing species. There are therefore only $N-1$ 'directions of parabolicity' of the mass transport equations. In order to extract them, we shall need the following standard projector in $\mathbb{R}^N$:
\begin{align*}
\mathcal{P}: \mathbb{R}^N \rightarrow \{1^N\}^{\perp} \, , \quad \mathcal{P} := \text{Id}_{\mathbb{R}^N} - \frac{1}{N} \, 1^N \otimes 1^N \, .
\end{align*}
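One checks directly that $\mathcal{P}$ is indeed the orthogonal projection onto $\{1^N\}^{\perp}$:

```latex
\begin{align*}
\mathcal{P} \, 1^N = 1^N - \frac{1}{N} \, 1^N \, (1^N \cdot 1^N) = 0 \, , \qquad
\mathcal{P} \, \zeta = \zeta \quad \text{for all } \zeta \cdot 1^N = 0 \, ,
\end{align*}
```

so that $\mathcal{P}^2 = \mathcal{P}$ and $\mathcal{P}$ is symmetric.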
Let us introduce also
\begin{align*}
\mathbb{R}^N_+ := & \{\rho = (\rho_1, \ldots, \rho_N) \in \mathbb{R}^N \, : \, \rho_i > 0 \text{ for } i = 1,\ldots,N\} \, ,\\
\overline{\mathbb{R}}^N_+ := & \{\rho = (\rho_1, \ldots, \rho_N) \in \mathbb{R}^N \, : \, \rho_i \geq 0 \text{ for } i = 1,\ldots,N\} \, .
\end{align*}
Our first main Theorem is devoted to the short-time existence of a strong solution.
\begin{theo}\label{MAIN}
We fix $p > 3$, and we assume that \begin{enumerate}[(a)]
\item $\Omega \subset \mathbb{R}^3$ is a bounded domain of class $\mathcal{C}^2$;
\item $M: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}^{N\times N}$ is a mapping of class $C^2(\mathbb{R}^{N}_+; \, \mathbb{R}^{N\times N})$ into the positive semi-definite matrices of rank $N-1$ with constant kernel spanned by $1^N = (1, \ldots, 1)$;
\item $h: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}$ is of class $C^3(\mathbb{R}^{N}_+)$, and is a co-finite function of Legendre type in its domain $\mathbb{R}^N_+$;
\item $r: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}^{N}$ is a mapping of class $C^1(\mathbb{R}^{N}_+)$ into the orthogonal complement of $1^N$;
\item \label{force} The external forcing $b$ satisfies $\mathcal{P} \, b \in W^{1,0}_p(Q_T; \, \mathbb{R}^{N\times 3})$ and $b - \mathcal{P} \, b \in L^p(Q_T; \, \mathbb{R}^{N\times 3})$. For simplicity, we assume that $\nu(x) \cdot \mathcal{P} \,b(x, \, t) = 0$ for $x \in \partial \Omega$ and $\lambda_1$-almost all $t \in ]0, \, T[$.
\item The initial data $\rho^0_{1}, \ldots, \rho^0_{N}: \, \Omega \rightarrow \mathbb{R}_+$ are strictly positive measurable functions satisfying the following conditions:
\begin{itemize}
\item The initial total mass density $\varrho_0 := \sum_{i=1}^N \rho_{i}^0$ is of class $W^{1,p}(\Omega)$;
\item There is $m_0 > 0$ such that $ 0 < m_0 \leq \varrho_0(x)$ for all $x \in \Omega$;
\item The vector defined via $\mu^0 := \partial_{\rho}h(\theta, \, \rho^0_{1}, \ldots, \rho^0_{N})$ (initial chemical potentials) satisfies $\mathcal{P} \,\mu^0 \in W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^N)$;
\item The compatibility condition $\nu(x) \cdot \mathcal{P} \nabla\mu^0(x) = 0$ is valid in $W^{1-\frac{3}{p}}_p(\partial \Omega; \, \mathbb{R}^N)$ in the sense of traces;
\end{itemize}
\item The initial velocity $v^0$ belongs to $W^{2-\frac{2}{p}}_{p}(\Omega; \, \mathbb{R}^3)$ with $v^0 = 0$ in $W^{2-\frac{3}{p}}_{p}(\partial \Omega; \, \mathbb{R}^3)$.
\end{enumerate}
Then, there exists $0 < T^* \leq T$ such that the problem \eqref{mass}, \eqref{momentum} with closure relations \eqref{DIFFUSFLUX}, \eqref{CHEMPOT}, \eqref{GIBBSDUHEMEULER} and initial and boundary conditions \eqref{initial0rho}, \eqref{initial0v}, \eqref{lateral0v}, \eqref{lateral0q} possesses a unique solution in the class
\begin{align*}
\rho \in W^{1}_{p}(Q_{T^*}; \, \mathbb{R}^N_+), \quad v \in W^{2,1}_p(Q_{T^*}; \, \mathbb{R}^3) \, ,
\end{align*}
such that, moreover, $\mu := \partial_{\rho}h(\theta, \, \rho)$ satisfies $ \mathcal{P} \,\mu \in W^{2,1}_p(Q_{T^*}; \, \mathbb{R}^N)$. The solution can be uniquely extended to a larger time interval whenever there is $\alpha > 0$ such that
\begin{align*}
\|\mathcal{P} \,\mu\|_{C^{\alpha,\frac{\alpha}{2}}(Q_{T^*})} + \|\nabla (\mathcal{P} \,\mu)\|_{L^{\infty,p}(Q_{T^*})} + \|v\|_{L^{z \, p,p}(Q_{T^*})} + \int_{0}^{T^*} [\nabla v(s)]_{C^{\alpha}(\Omega)} \, ds < + \infty \, .
\end{align*}
Here $z = z(p)$ satisfies $z = \frac{3}{p-2}$ for $3 < p < 5$, $z > 1$ arbitrary for $p = 5$, and $z = 1$ for $p > 5$.
\end{theo}
\begin{rem}
\begin{itemize}
\item A solution $(\rho, \ v)$ in the sense of Theorem \ref{MAIN} is strong: The equations \eqref{mass}, \eqref{momentum} are valid pointwise almost everywhere in $Q_{T^*}$. To see this one uses that \eqref{CONSTRAINT} implies the identity
\begin{align*}
J = - M(\rho) \, \nabla \mu = - M(\rho) \, \mathcal{P} \, \nabla \mu \, .
\end{align*}
Since Theorem \ref{MAIN} establishes parabolic regularity for $ \mathcal{P} \,\mu$, the contributions $\divv J$ are well defined in $L^p(Q_{T^*})$.
\item The result carries over to the case where the potential $h$ has a smaller domain: $h: \, \mathcal{D} \subset \mathbb{R}^N_+ \rightarrow \mathbb{R}$, provided that $\mathcal{D}$ is open, and that $h$ is \emph{of Legendre type} in $\mathcal{D}$\footnote{The concept of a function of Legendre type on an open set is defined in \cite{rockafellar}, Th. 26.5. This is more exact than speaking of a Legendre function, even if we shall also employ this terminology when the context is unequivocal.}.
The initial data must satisfy $\rho^0 \in \mathcal{D}$. The maximal existence time is then further restricted by the distance of $\rho^0$ to $\partial \mathcal{D}$. The case that $h$ is not co-finite, which means that the image of $\nabla_{\rho} h$ is a true subset of $\mathbb{R}^N$, corresponds to constraints affecting the chemical potentials, which we do not wish to discuss further here.
\end{itemize}
\end{rem}
\begin{rem}
Other functional space settings are applicable:
\begin{itemize}
\item The parabolic H\"older-space scale $C^{2+\alpha, \, 1+\frac{\alpha}{2}}$ seems to be very natural. It was applied successfully to the compressible Navier-Stokes equations with energy equation: see \cite{tani} and \cite{solocompress3} for an (unfortunately very short) proof of local well-posedness;
\item The Hilbert--space scale $W^{2\ell,\ell}_2$ with $\ell$ sufficiently large. In the latter approach one uses the conservation law structure of the system to derive \emph{a priori} bounds for higher derivatives of the solution in $L^2$. For several variants of the method in the case of Navier-Stokes or Navier-Stokes-Fourier, see \cite{matsunishi,valli82,valli83,vallizaja}, \cite{solocompress2}, or also \cite{cho,hoff,bellafei} and references. Usually, somewhat more regularity of the domain and the coefficients is demanded because it is necessary to differentiate the equations several times.
\item In \cite{feinovsun} the Navier-Stokes-Fourier system was also studied in classes of higher square--integrable derivatives. In this case the maximal existence time can be characterised by a weaker criterion. Indeed, the boundedness of the velocity gradient suffices to guarantee that the solution can be extended.
\end{itemize}
Proving the local well-posedness for the mixture case in these classes should be possible 'under suitable modifications'. The quotation marks hint at a substantial problem: the principal part of the parabolic system for the variables $q_1,\ldots,q_{N-1}$ is non-diagonal for $N > 2$. This might be an obstacle to simply transferring the results.
\end{rem}
Our second main result concerns global existence under suitable restrictions for the data. Here the concept of an equilibrium solution is first needed. An equilibrium solution for \eqref{mass}, \eqref{momentum} is defined as a vector $(\rho_1^{\text{eq}}, \ldots, \rho_N^{\text{eq}}, \, v_1^{\text{eq}}, \, v_2^{\text{eq}}, \, v_3^{\text{eq}})$ of functions defined in $\Omega$ with
\begin{align*}
\rho^{\text{eq}} \in W^{1,p}(\Omega; \, \mathbb{R}^N_+), \quad v^{\text{eq}} \in W^{2,p}(\Omega; \, \mathbb{R}^3)
\end{align*}
and the vector field $\mu^{\text{eq}} = \partial_{\rho}h(\theta, \, \rho^{\text{eq}})$ satisfies $ \mathcal{P} \,\mu^{\text{eq}} \in W^{2,p}(\Omega; \, \mathbb{R}^N)$. For these functions, the relations
\begin{alignat}{2}
\label{massstat} \divv\Big( \rho^{\text{eq}}_i \, v^{\text{eq}} - \sum_{j=1}^N M_{i,j}(\rho^{\text{eq}}) \, (\nabla \mu_j^{\text{eq}} - b^j(x)) \Big) & = 0 & & \text{ for } i = 1,\ldots,N \, ,\\
\label{momentumsstat} \divv( \varrho^{\text{eq}} \, v^{\text{eq}}\otimes v^{\text{eq}} - \mathbb{S}(\nabla v^{\text{eq}})) + \nabla p^{\text{eq}} & = \sum_{i=1}^N \rho_i^{\text{eq}} \, b^i(x) & &
\end{alignat}
are valid in $\Omega$. Here we let $p^{\text{eq}} = -h(\theta, \, \rho^{\text{eq}}) + \sum_{i=1}^N \rho_i^{\text{eq}} \, \mu_i^{\text{eq}}$. The boundary conditions are $v^{\text{eq}} = 0$ on $\partial \Omega$ and $\nu(x) \cdot \sum_{j=1}^N M_{i,j}(\rho^{\text{eq}}) \, (\nabla \mu_j^{\text{eq}} - b^j(x)) = 0$ on $\partial \Omega$ for $i = 1,\ldots,N$. We show that the problem \eqref{mass}, \eqref{momentum} possesses a unique strong solution on arbitrarily large, but finite time intervals, given that:
\begin{enumerate}[(i)]
\item The equilibrium solution and the initial data are sufficiently smooth;
\item The distance of the initial data to an equilibrium solution is sufficiently small.
\end{enumerate}
\begin{theo}\label{MAIN3}
We adopt the assumptions of Theorem \ref{MAIN}, but assume in addition that $b = b(x)$ is independent of time, with $b \in W^{1,p}(\Omega; \, \mathbb{R}^{N\times 3})$, and that $r \equiv 0$. Moreover, we assume that an equilibrium solution $(\rho^{\text{eq}}, \, v^{\text{eq}}) \in W^{1,p}(\Omega; \, \mathbb{R}^N_+) \times W^{2,p}(\Omega; \, \mathbb{R}^3)$ is given and that the data possess the additional regularity
\begin{align*}
\varrho^{\text{eq}}, \, \varrho^0 \in W^{2,p}(\Omega), \quad v^{\text{eq}} \in W^{3,p}(\Omega; \, \mathbb{R}^3), \, v^0 \in W^{2,p}(\Omega; \, \mathbb{R}^3) \, .
\end{align*}
Then, for every $0 < T < + \infty$, there exists $R_1 > 0$, depending on $T$ and on the respective norms of the data, such that if
\begin{align*}
\|\mathcal{P}\, (\mu^0 - \mu^{\text{eq}})\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^N)} + \|\varrho^0 - \varrho^{\text{eq}}\|_{W^{1,p}(\Omega)} + \|v^0 - v^{\text{eq}}\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^3)} \leq R_1
\end{align*}
the problem \eqref{mass}, \eqref{momentum} with closure relations \eqref{DIFFUSFLUX}, \eqref{CHEMPOT}, \eqref{GIBBSDUHEMEULER} and initial and boundary conditions \eqref{initial0rho}, \eqref{initial0v}, \eqref{lateral0v}, \eqref{lateral0q} possesses a unique solution in $Q_T$ in the same class as in Theorem \ref{MAIN}.
\end{theo}
\begin{rem}
\begin{itemize}
\item One particular stability issue for the compressible Navier-Stokes equations and for the Navier-Stokes-Fourier system is well studied in the Hilbert space setting in \cite{matsunishi,valli82,valli83,vallizaja}. It is proved there that some 'equilibrium solution' $v^{\text{eq}} = 0$ and $\varrho^{\text{eq}} = \text{const.}$ (and, in the case of Navier-Stokes-Fourier, $\theta^{\text{eq}} = \text{const.}$) is globally stable. For initial data sufficiently close to this solution, there indeed exists a global strong solution ($T = + \infty$). Extensions of this result to the stability of other stationary solutions are, to the best of our knowledge, not available.
\item Robustness estimates on bounded time intervals are to be found in Theorem 1 of \cite{bellafei}, where more recent references on the stability discussion are also given.
\item The additional regularity required in Theorem \ref{MAIN3} for the data is sufficient, but might not be optimal. Since we are interested in a qualitative result, we do not attempt to formulate minimal assumptions here.
\end{itemize}
\end{rem}
\subsection{Organisation of the paper}
Section \ref{changevariables} explains the change of variables in the transport problem, which is the core of our method. The closely related Section \ref{reformulation} reformulates the partial differential equations and the main results in these new variables.
In Section \ref{technique1} we introduce the differential operators to be investigated in the analysis and the Banach spaces in which they are defined. We state the $C^1-$property for these operators and discuss diverse technicalities such as trace properties and the extension of the boundary data.
Section \ref{twomaps} introduces two methods of linearising in order to reformulate the operator equation as a fixed-point problem. The first method freezes both the coefficients and the lower-order terms, and is applied to prove the short-time existence. The second method is somewhat more demanding and relies on linearising the entire lower-order part of the operators around a suitable extension of the initial data. This second method allows us to prove stability estimates and is used for the global existence result.
The remaining sections contain the technical part. Section \ref{ESTI} states the main continuity estimates for the inverse of the principal part of the linearised operators.
In Section \ref{contiT}, we apply these estimates to show the controlled growth of the solution and the state space estimates for the short-time existence and uniqueness. We prove the convergence of the fixed-point iteration in Section \ref{FixedPointIter}.
For the global existence, we prove the main estimate in Section \ref{contiT1}, and the existence of a fixed point in Section \ref{FixedPointT1}.
\section{A change of variables to tackle the analysis}\label{changevariables}
The system \eqref{mass}, \eqref{momentum} exhibits several features that might obstruct global well-posedness: 1. The diffusion system is coupled at the highest order; 2. This system possesses mixed parabolic--hyperbolic character, with possibly degenerate parabolicity; 3. The mass densities are subject to positivity constraints. We show in the present paper that there is a reformulation of the problem allowing us to eliminate the positivity constraints on $\rho$ and to handle the singularity due to $M \, 1^N = 0$.
A first main idea is to use the chemical potentials as principal variables. With the help of the conjugate convex function $h^*$ to $h$, we invert the relation $\mu_i = \partial_{\rho_i}h(\rho)$ for $i=1,\ldots,N$, which reads
\begin{align}\label{convexanal}
\rho_i = \partial_{\mu_i}h^*(\mu_{1},\ldots,\mu_N) \text{ for } i=1,\ldots,N \, .
\end{align}
If $h$ is of Legendre type on $\mathbb{R}^N_+$ and co-finite, then $h^*$ is strictly convex and smooth on $\mathbb{R}^N$. Thus, the natural domain of $\mu$ is not subject to constraints. The idea of passing to dual variables to avoid the positivity constraints in multicomponent transport problems is not new. It was probably first introduced in the context of the weak solution analysis of semiconductor equations (see, among others, \cite{gagroe}). The method has been generalised in the context of the \emph{boundedness-by-entropy method}, which allows the weak solution analysis of full-rank parabolic systems: see, among others, \cite{juengel15,juengel17}.
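To make the duality concrete, the following is a minimal numerical sketch, not part of the analysis, for the simplest entropy-type choice $h(\rho) = \sum_i \rho_i(\ln\rho_i - 1)$ (an assumption for illustration only), whose conjugate is $h^*(\mu) = \sum_i e^{\mu_i}$. It verifies the inversion \eqref{convexanal} and the Euler identity $p = -h + \rho\cdot\mu = h^*(\mu)$:

```python
import numpy as np

def h(rho):
    # entropy-type free energy h(rho) = sum_i rho_i (ln rho_i - 1); illustrative choice
    return np.sum(rho * (np.log(rho) - 1.0))

def mu(rho):
    # chemical potentials mu_i = dh/drho_i = ln rho_i
    return np.log(rho)

def h_star(m):
    # convex conjugate h*(mu) = sum_i exp(mu_i), defined on all of R^N (co-finiteness)
    return np.sum(np.exp(m))

rho = np.array([0.3, 1.2, 2.5])   # arbitrary positive mass densities
m = mu(rho)

# inversion formula: rho_i = dh*/dmu_i = exp(mu_i)
assert np.allclose(np.exp(m), rho)

# Euler identity: p = -h + rho . mu equals h*(mu); here p = total density
p = -h(rho) + rho @ m
assert np.isclose(p, h_star(m))
print(p)   # approximately rho.sum() = 4.0
```

For this particular $h$ the pressure coincides with the total mass density, which matches the ideal-gas computation (with constants normalised to one).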
In the context of Fick-Onsager or equivalent closure equations for the diffusion fluxes, the PDE system exhibits a rank $N-1$ parabolicity. In the literature, this parabolicity has been exploited by imposing an incompressibility condition, which allows one to eliminate one variable: see \cite{chenjuengel}, \cite{herbergpruess}, \cite{bothepruess} for this approach. In these cases the free energy is positively homogeneous, and the thermodynamic pressure resulting from \eqref{GIBBSDUHEMEULER} is constant.
In the paper \cite{dredrugagu16}, we first proposed to combine the inversion formula \eqref{convexanal} with a linear transformation in order to eliminate the positivity constraints and to exploit the rank $N-1$ parabolicity, without imposing restrictions on the pressure of the physical system. Note that already in the papers \cite{bothedreyer,dreyerguhlkemueller}, devoted mainly to modelling, the diffusion problem is partly formulated in the variables $\varrho$ (total mass density) and $\mu_1-\mu_N, \ldots,\mu_{N-1}-\mu_N$ (differences of chemical potentials). These models single out one particular species, introducing some asymmetry. For the theoretical investigation we shall therefore rather follow \cite{dredrugagu16}, where the choice of the projector is left open.
We choose a basis $\xi^1,\ldots,\xi^{N-1}, \, \xi^N$ of $\mathbb{R}^N$ such that $\xi^N = 1^N$, and introduce the uniquely determined $\eta^1,\ldots,\eta^N \in \mathbb{R}^N$ such that $\xi^i \cdot \eta^j = \delta^{i}_{j}$ for $i,j = 1,\ldots,N$ (dual basis). We define
\begin{align*}
q_{\ell} := \eta^{\ell} \cdot \mu := \sum_{i=1}^N \eta^{\ell}_i \, \mu_i \text{ for } \ell = 1,\ldots,N-1 \, .
\end{align*}
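For illustration (any basis with $\xi^N = 1^N$ is admissible), one concrete choice is $\xi^{\ell} = e^{\ell}$, the $\ell$-th standard basis vector, for $\ell = 1,\ldots,N-1$. The dual basis is then $\eta^{\ell} = e^{\ell} - e^{N}$ for $\ell = 1,\ldots,N-1$ and $\eta^N = e^N$, and the relative chemical potentials become

```latex
\begin{align*}
q_{\ell} = \eta^{\ell} \cdot \mu = \mu_{\ell} - \mu_{N} \quad \text{ for } \ell = 1,\ldots,N-1 \, ,
\end{align*}
```

that is, precisely the differences of chemical potentials employed in \cite{bothedreyer,dreyerguhlkemueller}.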
We call $q_1,\ldots,q_{N-1}$ the \emph{relative chemical potentials}. We can now express
\begin{align*}
\varrho = \sum_{i=1}^N \rho_i & = 1^{N} \cdot \nabla_{\mu}h^*(\mu_1,\ldots,\mu_N) = \sum_{i=1}^N \partial_{\mu_i} h^*(\mu_1,\ldots,\mu_N) \\
& = 1^N \cdot \nabla_{\mu}h^*(\sum_{\ell = 1}^{N-1} q_{\ell} \, \xi^{\ell} + (\mu \cdot \eta^N) \, 1^N) \, .
\end{align*}
This is an algebraic equation of the form $F(\mu \cdot \eta^N, \, q_1, \ldots, q_{N-1}, \, \varrho) = 0$. We notice that
\begin{align*}
\partial_{\mu \cdot \eta^N} F(\mu \cdot \eta^N, \, q_1, \ldots, q_{N-1}, \, \varrho) = D^{2}h^*(\mu) 1^N \cdot 1^N > 0 \, ,
\end{align*}
due to the strict convexity of the conjugate function. Thus, the latter algebraic equation defines the last component $\mu \cdot \eta^N$ implicitly as a differentiable function of $\varrho$ and $q_1, \ldots, q_{N-1}$. We call this function $\mathscr{M}$ and obtain the equivalent formula
\begin{align}\label{MUAVERAGE}
\mu & = \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N \, ,\\
\label{RHONEW}\rho & = \nabla_{\mu}h^*( \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N) \, ,
\end{align}
where only the total mass density $\varrho$ and the relative chemical potentials $q_1,\ldots,q_{N-1}$ occur as free variables. The pressure obeys the Euler equation \eqref{GIBBSDUHEMEULER}, so that
\begin{align}\label{PRESSUREDEF}
p = h^*(\mu) = & h^*( \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N) =: P(\varrho, \, q) \, .
\end{align}
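As an illustration, not needed for the analysis below, consider the ideal-mixture free energy $h(\rho) = \sum_{i=1}^N \rho_i (\log \rho_i - 1)$: it is of Legendre type and co-finite, with conjugate $h^*(\mu) = \sum_{i=1}^N e^{\mu_i}$. In this case the algebraic equation for $\mathscr{M}$ can be solved explicitly:

```latex
\begin{align*}
\varrho = \sum_{i=1}^N e^{\mu_i}
= e^{\mathscr{M}} \sum_{i=1}^N \exp\Big( \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell}_i \Big)
\quad \Longrightarrow \quad
\mathscr{M}(\varrho, \, q) = \log \varrho - \log \sum_{i=1}^N \exp\Big( \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell}_i \Big) \, ,
\end{align*}
```

and \eqref{PRESSUREDEF} yields $P(\varrho, \, q) = h^*(\mu) = \varrho$ in this example.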
Certain properties of the functions $\mathscr{M}$ and $P$ for general $h = h(\rho)$ have already been studied in Section 5 of \cite{dredrugagu16}. Here we need only the following property:
\begin{lemma}\label{pressurelemma}
Suppose that $h \in C^{3}(\mathbb{R}^N_+)$ is a \emph{Legendre} function in $\mathbb{R}^N_+$, and the image of the gradient map $\nabla_{\rho} h$ is the entire $\mathbb{R}^N$. Then, the formula \eqref{PRESSUREDEF} defines a function $P$ which belongs to $C^2(\mathbb{R}_{+} \times \mathbb{R}^{N-1})$.
\end{lemma}
\begin{proof}
Due to Theorem 26.5 of \cite{rockafellar} on the Legendre transform, we know that the convex conjugate $h^*$ is differentiable and locally strictly convex on the image $\nabla_{\rho} h(\mathbb{R}^N_+)$ of the gradient mapping. In addition, $\nabla_{\rho} h(\mathbb{R}^N_+) = \text{int}(\text{dom}(h^*))$. By assumption, we thus know that $\text{dom}(h^*) = \mathbb{R}^N$. Since $\nabla h$ and $\nabla h^*$ are inverse to each other, the inverse mapping theorem shows that $h^* \in C^3(\mathbb{R}^N)$ if and only if $h \in C^3(\mathbb{R}^N_+)$.
Consider now the function $\mathscr{M}$ introduced in \eqref{MUAVERAGE}. Since it is obtained implicitly from the algebraic relation $1^N \cdot \nabla_{\mu}h^*(\sum_{\ell = 1}^{N-1} q_{\ell} \, \xi^{\ell} + (\mu \cdot \eta^N) \, 1^N) - \varrho = 0$, we obtain for the derivatives the expressions
\begin{align*}
\partial_{\varrho} \mathscr{M}(\varrho, \, q) = \frac{1}{D^2h^*1^N \cdot 1^N}, \quad \partial_{q_k}\mathscr{M}(\varrho, \, q) = -\frac{D^2h^*1^N \cdot \xi^k}{D^2h^*1^N \cdot 1^N} \, ,
\end{align*}
in which the Hessian $D^2h^*$ is evaluated at $\mu = \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N$. We thus see that $\mathscr{M}\in C^2(\mathbb{R}_+ \times \mathbb{R}^{N-1})$. Clearly, the formula \eqref{PRESSUREDEF} implies that $P \in C^2(\mathbb{R}_+ \times \mathbb{R}^{N-1})$.
\end{proof}
In order to deal with the right-hand side (external forcing), we also introduce projections for the field $b$. For $\ell = 1,\ldots,N-1$, we define $\tilde{b}^{\ell}(x, \, t) := \sum_{i=1}^N b^i(x, \, t) \, \eta^{\ell}_i$ and $\bar{b}(x, \, t) := \sum_{i=1}^N b^i(x, \, t) \, \eta^{N}_i$, so that, using $\xi^N_i = 1$, $$b^i(x, \, t) = \sum_{\ell=1}^{N-1} \tilde{b}^{\ell}(x, \, t) \, \xi^{\ell}_i + \bar{b}(x, \, t) \text{ for } i =1,\ldots,N \, .$$
For the reaction term $r: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}^N$, $\rho \mapsto r(\rho)$, we define
\begin{align*}
\tilde{r}_k(\varrho, \, q) := \sum_{i=1}^N \xi^k_i \, r_i\Big( \sum_{\ell=1}^{N-1} R_{\ell}(\varrho, \, q) \, \eta^{\ell} + \varrho \, \eta^N\Big) \, \text{ for } k = 1,\ldots,N-1 \, ,
\end{align*}
where the maps $R_{\ell}$ are introduced in \eqref{RHONEWPROJ} below.
\section{Reformulation of the partial differential equations and of the main theorem}\label{reformulation}
We recall \eqref{CONSTRAINT} and we see that the diffusion fluxes have the form
\begin{align*}
& J^i = - \sum_{j=1}^N M_{i,j}(\rho_1,\ldots,\rho_N) \, (\nabla \mu_j - b^j(x, \, t))\\
&= -\sum_{j=1}^N\left[ \sum_{\ell = 1}^{N-1} M_{i,j}(\rho_1,\ldots,\rho_N) \, \xi^{\ell}_j \, (\nabla q_{\ell} - \tilde{b}^{\ell}) + M_{i,j}(\rho_1,\ldots,\rho_N) \, (\nabla \mathscr{M}(\varrho, \, q) - \bar{b}(x, \, t))\right] \\
& \quad = - \sum_{\ell = 1}^{N-1} \left[\sum_{j=1}^N M_{i,j}(\rho_1,\ldots,\rho_N) \, \xi^{\ell}_j\right] \, (\nabla q_{\ell}- \tilde{b}^{\ell}) \, ,
\end{align*}
where the last term cancels because $1^N$ spans the kernel of $M(\rho)$, so that $\sum_{j=1}^N M_{i,j} = 0$. If we introduce the rectangular projection matrix $\mathcal{Q}_{j,\ell} = \xi^{\ell}_j$ for $\ell = 1,\ldots,N-1$ and $j = 1,\ldots,N$, then $J = - M \, \mathcal{Q} \, (\nabla q-\tilde{b})$. Thus, we consider equivalently
\begin{alignat*}{2}
\partial_t \rho + \divv( \rho \, v - M \, \mathcal{Q}\, (\nabla q-\tilde{b}(x, \,t))) & = r \, , & & \\
\partial_t (\varrho \, v) + \divv( \varrho \, v\otimes v - \mathbb{S}(\nabla v)) + \nabla P(\varrho, \, q) & = \sum_{i=1}^N \rho_i \, b^i(x, \, t) & & \, .
\end{alignat*}
Next we define, for $k = 1,\ldots, N-1$, the maps
\begin{align}\label{RHONEWPROJ}
R_{k}(\varrho, \, q) & := \sum_{j=1}^N \xi^{k}_j \, \rho_j = \sum_{j=1}^N \xi^{k}_j \, \partial_{\mu_j}h^*( \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N )\, .
\end{align}
Expanding $\rho$ in the basis $\eta^1,\ldots,\eta^N$ dual to $\xi^1,\ldots,\xi^N$, and using $\xi^N \cdot \rho = 1^N \cdot \rho = \varrho$, we can express $\rho_i = \sum_{k=1}^{N-1} R_k(\varrho, \, q) \, \eta^k_i + \varrho \, \eta^N_i$.
We note a particular property of the vector field $R$.
\begin{lemma}\label{rhonewlemma}
Suppose that $h \in C^{3}(\mathbb{R}^N_+)$ is a co-finite Legendre function on $\mathbb{R}^N_+$. Then, the formula \eqref{RHONEWPROJ} defines $R$ as a vector field of class $C([0, \, +\infty[ \times \mathbb{R}^{N-1}; \, \mathbb{R}^{N-1})$ and $C^2(\mathbb{R}_{+} \times \mathbb{R}^{N-1}; \, \mathbb{R}^{N-1})$. The Jacobian $\{R_{k,q_j}\}_{k,j=1,\ldots,N-1}$ is symmetric and positively definite at every $(\varrho, \, q) \in \mathbb{R}_{+} \times \mathbb{R}^{N-1}$ and
\begin{align*}
R_{q}(\varrho, \, q) = \mathcal{Q}^T \, D^{2}h^* \, \mathcal{Q} - \frac{\mathcal{Q}^T \, D^2h^* 1^N \otimes \mathcal{Q}^T \, D^2h^* 1^N}{D^2h^* 1^N \cdot 1^N} \, .
\end{align*}
In this formula, the Hessian $D^2h^*$ is evaluated at $\mu = \sum_{\ell=1}^{N-1} q_{\ell} \, \xi^{\ell} + \mathscr{M}(\varrho, \, q_1,\ldots,q_{N-1}) \, 1^N$.
\end{lemma}
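For the reader's orientation, we sketch the computation behind this formula: differentiating \eqref{RHONEWPROJ} with respect to $q_j$ by the chain rule and inserting the expression for $\partial_{q_j}\mathscr{M}$ from the proof of Lemma \ref{pressurelemma} gives

```latex
\begin{align*}
\partial_{q_j} R_k(\varrho, \, q)
= \xi^{k} \cdot D^2h^* \big( \xi^{j} + \partial_{q_j}\mathscr{M}(\varrho, \, q) \, 1^N \big)
= \xi^{k} \cdot D^2h^* \, \xi^{j}
- \frac{(\xi^{k} \cdot D^2h^* \, 1^N)\, (\xi^{j} \cdot D^2h^* \, 1^N)}{D^2h^* \, 1^N \cdot 1^N} \, .
\end{align*}
```

The symmetry of $R_q$ is then apparent, and the positive definiteness follows from the strict Cauchy--Schwarz inequality for the scalar product induced by $D^2h^*$, since $\mathcal{Q}z \notin \operatorname{span}\{1^N\}$ for $z \neq 0$.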
The proof is direct, using Corollary 5.3 of \cite{dredrugagu16}. Multiplying the mass transport equations by $\xi^{k}_i$ and summing over $i$, we obtain that
\begin{align*}
\partial_t R_k(\varrho, \, q) +\divv\Big(R_k(\varrho, \, q) \, v - \underbrace{(\mathcal{Q}^T \, M(\rho) \, \mathcal{Q})_{k,\ell}}_{=:\widetilde{M}_{k,\ell}(\rho)}\, (\nabla q_{\ell} - \tilde{b}^{\ell})\Big) = (\mathcal{Q}^T \, r)_k \text{ for } k=1,\ldots,N-1 \, .
\end{align*}
It turns out that if $M(\rho)$ has rank $N-1$ with kernel spanned by $1^N$ at all states $\rho \in \mathbb{R}^{N}_+$, then the matrix $\widetilde{M}(\rho)$ is symmetric and strictly positively definite at all states $\rho \in \mathbb{R}^{N}_+$. Making use of \eqref{MUAVERAGE}, \eqref{RHONEW}, we can also consider $\widetilde{M}$ as a mapping of the variables $\varrho$ and $q$. Using Lemma \ref{rhonewlemma}, we can establish the following properties of this map.
\begin{lemma}\label{Mnewlemma}
Suppose that $h \in C^{3}(\mathbb{R}^N_+)$ is a co-finite Legendre function on $\mathbb{R}^N_+$. Suppose further that $M: \, \mathbb{R}^N_+ \rightarrow \mathbb{R}^{N\times N}$ is a mapping into the positively semi-definite matrices of rank $N-1$ with kernel spanned by $1^N$, having entries $M_{i,j}$ of class $C^{2}(\mathbb{R}^N_+) \cap C(\overline{\mathbb{R}}^N_{+})$. Then the formula $ \widetilde{M}(\varrho, \, q) := \mathcal{Q}^T \, M(\rho) \, \mathcal{Q}$ defines a map $\widetilde{M}: \, \mathbb{R}_+ \times \mathbb{R}^{N-1} \rightarrow \mathbb{R}^{(N-1)\times (N-1)}$ into the symmetric positively definite matrices. The entries $\widetilde{M}_{k,j}$ are functions of class $C^{2}(]0, \, +\infty[ \times \mathbb{R}^{N-1})$ and $C([0, \, +\infty[\times \mathbb{R}^{N-1})$.
\end{lemma}
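The positive definiteness asserted in the lemma can be seen directly; as a sketch, for $z \in \mathbb{R}^{N-1}$,

```latex
\begin{align*}
\widetilde{M}(\varrho, \, q) \, z \cdot z = M(\rho) \, \mathcal{Q} z \cdot \mathcal{Q} z \geq 0 \, ,
\end{align*}
```

with equality only if $\mathcal{Q}z = \sum_{\ell=1}^{N-1} z_{\ell} \, \xi^{\ell}$ belongs to the kernel of $M(\rho)$, hence to $\operatorname{span}\{1^N\}$; since $\xi^1,\ldots,\xi^{N-1}, 1^N$ form a basis of $\mathbb{R}^N$, this forces $z = 0$.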
Overall, we get for the variables $(\varrho, \, q_1, \ldots, q_{N-1}, \, v)$ instead of \eqref{mass}, \eqref{momentum} the equivalent equations
\begin{alignat}{2}
\label{mass2} \partial_t R(\varrho, \, q) + \divv( R(\varrho, \, q) \, v - \widetilde{M}(\varrho, \, q) \, (\nabla q - \tilde{b}(x, \, t)) ) & = \tilde{r}(\varrho, \, q) \, ,& & \\
\label{mass2tot}\partial_t \varrho + \divv(\varrho \, v) & = 0 \, ,& & \\
\label{momentum2} \partial_t (\varrho \, v) + \divv( \varrho \, v\otimes v - \mathbb{S}(\nabla v)) + \nabla P(\varrho, \, q) & = R(\varrho, \, q) \cdot \tilde{b}(x, \, t) + \varrho \, \bar{b}(x, \, t) & & \, .
\end{alignat}
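The right-hand side of \eqref{momentum2} results from the decomposition of the force field $b$ together with the definition of $R$:

```latex
\begin{align*}
\sum_{i=1}^N \rho_i \, b^i
= \sum_{\ell=1}^{N-1} \Big( \sum_{i=1}^N \rho_i \, \xi^{\ell}_i \Big) \, \tilde{b}^{\ell}
+ \Big( \sum_{i=1}^N \rho_i \Big) \, \bar{b}
= R(\varrho, \, q) \cdot \tilde{b}(x, \, t) + \varrho \, \bar{b}(x, \, t) \, .
\end{align*}
```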
Up to the positivity constraint on the total mass density $\varrho$, the latter problem is free of constraints!
Our first aim is now to show that at least locally--in--time the system \eqref{mass2}, \eqref{mass2tot}, \eqref{momentum2} for the variables $(\varrho, \, q_1, \ldots, q_{N-1}, \, v)$ is well--posed. We consider initial conditions
\begin{alignat}{2}\label{initialq}
q(x, \, 0) & = q_0(x) & & \text{ for } x \in \Omega\, ,\\
\label{initialrho}\varrho(x, \, 0) & = \varrho_0(x) & & \text{ for } x \in \Omega \, ,\\
\label{initialv} v(x, \, 0) & = v_0(x) & & \text{ for } x \in \Omega \, .
\end{alignat}
Due to the preliminary considerations in Section \ref{changevariables}, prescribing these variables is completely equivalent to prescribing initial values for the mass densities $\rho_i$ and the velocity $v$. It suffices to define $\mu^0 = \partial_{\rho}h(\rho^0)$ and then $q^0_k = \mu^0\cdot \eta^k$ for $k=1,\ldots,N-1$. For simplicity, we consider the linear homogeneous boundary conditions
\begin{alignat}{2}
\label{lateralv} v & = 0 & &\text{ on } S_T \, ,\\
\label{lateralq} \nu \cdot \nabla q_{k} & = 0 & &\text{ on } S_T \text{ for } k = 1,\ldots,N-1 \, .
\end{alignat}
The conditions \eqref{lateralq} and \eqref{lateral0q} are equivalent, because we assume throughout the paper that the given forcing $b$ satisfies $\nu(x) \cdot \mathcal{P} \,b(x, \, t) = 0$ for $x \in \partial \Omega$ (see the assumption \eqref{force} in the statement of Theorem \ref{MAIN}). We could also do without this assumption, but at the price of further technical complications, avoided here, arising from the need to conceptualise surface source terms as well. Owing to Lemmas \ref{pressurelemma}, \ref{rhonewlemma} and \ref{Mnewlemma}, the coefficient functions $R$, $\widetilde{M}$ and $P$ are of class $C^2$ in their domain of definition $\mathbb{R}_+ \times \mathbb{R}^{N-1}$. The set $\Omega$ is likewise assumed to be smooth (further precisions in the statement of the theorem). We reformulate Theorem \ref{MAIN} for the new variables.
\begin{theo}\label{MAIN2}
Assume that the coefficient functions $R$, $\widetilde{M}$ and $P$ are of class $C^2$, while $\tilde{r}$ is of class $C^1$, in the domain of definition $\mathbb{R}_+ \times \mathbb{R}^{N-1}$. Let $\Omega$ be a bounded domain with boundary $\partial \Omega$ of class $\mathcal{C}^2$. Suppose that, for some $p > 3$, the initial data are of class
\begin{align*}
q^0 \in W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1}), \, \varrho_0 \in W^{1,p}(\Omega; \, \mathbb{R}_+), \, v^0 \in W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{3}) \, ,
\end{align*}
satisfying $\varrho_0(x) \geq m_0 > 0$ in $\Omega$ and the compatibility conditions $\nu(x) \cdot \nabla q^0(x) = 0$ and $v^0(x) = 0$ on $\partial \Omega$.
Assume that $\tilde{b} \in W^{1,0}_p(Q_T; \, \mathbb{R}^{(N-1)\times 3})$ and $\bar{b} \in L^p(Q_T; \,\mathbb{R}^3)$.
Then there is $0< T^* \leq T$, depending only on these data in the norms just specified, such that the problem \eqref{mass2}, \eqref{mass2tot}, \eqref{momentum2} with initial conditions \eqref{initialq}, \eqref{initialrho}, \eqref{initialv} and boundary conditions \eqref{lateralv}, \eqref{lateralq} is uniquely solvable in the class
\begin{align*}
(q, \, \varrho,\,v) \in W^{2,1}_p(Q_{T^*}; \, \mathbb{R}^{N-1}) \times W^{1,1}_{p,\infty}(Q_{T^*}; \, \mathbb{R}_+) \times W^{2,1}_p(Q_{T^*}; \, \mathbb{R}^{3}) \, .
\end{align*}
The solution can be uniquely extended in this class to a larger time interval whenever there is $\alpha > 0$ such that $$\|q\|_{C^{\alpha,\frac{\alpha}{2}}(Q_{T^*})} + \|\nabla q\|_{L^{\infty,p}(Q_{T^*})} + \|v\|_{L^{z \, p,p}(Q_{T^*})} + \int_{0}^{T^*} [\nabla v(s)]_{C^{\alpha}(\Omega)} \, ds < + \infty \, ,$$ where $z = z(p)$ is the number defined in Theorem \ref{MAIN}.
\end{theo}
\section{Technicalities}\label{technique1}
\subsection{Operator equation}
For functions $q_1,\ldots, q_{N-1}$, $v_1, \, v_2, \, v_3$ and for non-negative $\varrho$ defined on $\overline{\Omega} \times [0, \, T]$, we define
$\mathscr{A}(q, \, \varrho, \, v) = (\mathscr{A}^1(q, \, \varrho, \, v) , \, \mathscr{A}^2(\varrho, \, v) , \, \mathscr{A}^3(q, \, \varrho, \, v) )$, where
\begin{align*}
\mathscr{A}^1(q, \, \varrho, \, v) & := \partial_t R(\varrho, \, q) + \divv( R(\varrho, \, q) \, v - \widetilde{M}(\varrho, \, q) \, (\nabla q - \tilde{b}(x, \, t)) ) - \tilde{r}(\varrho, \, q) \\
\mathscr{A}^2(\varrho, \, v) & := \partial_t \varrho + \divv(\varrho \, v)\\
\mathscr{A}^3(q, \, \varrho, \, v) & := \varrho \, (\partial_t v + (v\cdot\nabla) v) - \divv \mathbb{S}(\nabla v) + \nabla P(\varrho, \, q) - R(\varrho, \, q) \cdot \tilde{b}(x, \,t) - \varrho \, \bar{b}(x, \,t) \, .
\end{align*}
We shall moreover introduce another related operator. This device allows us to deal with the time derivative of $\varrho$ occurring in $\mathscr{A}^1$, which couples the equations at highest order. Consider a solution $u = (q, \, \varrho, \, v)$ to $\mathscr{A}(u) = 0$. Computing the time derivatives in the equation $\mathscr{A}^1(u) = 0$, we obtain that
\begin{align*}
& R_{\varrho} \, (\partial_t \varrho + v \cdot \nabla \varrho) + \sum_{j=1}^{N-1} R_{q_j} \, (\partial_t q_j + v \cdot \nabla q_j) + R \, \divv v- \divv (\widetilde{M} \, \nabla q) \\
& \quad = - \divv(\widetilde{M} \, \tilde{b}(x, \, t)) + \tilde{r} \, .
\end{align*}
Here, all non-linear functions $R, \, R_{\varrho}, \, R_{q}$ and $\widetilde{M}$, $\tilde{r}$ etc. are evaluated at $(\varrho, \, q)$. We next exploit $\mathscr{A}^2(\varrho, \, v) = 0$ to see that $\partial_t \varrho + v \cdot \nabla \varrho = - \varrho \, \divv v$. Thus, under the side-condition $\mathscr{A}^2(\varrho, \, v) = 0$, the equation $\mathscr{A}^1(u) = 0$ is equivalent to
\begin{align}\begin{split}\label{A1equiv}
& R_{q}(\varrho, \, q) \, \partial_t q - \divv (\widetilde{M}(\varrho, \, q) \, \nabla q) \\
& \quad = (R_{\varrho}(\varrho, \, q) \, \varrho - R(\varrho, \, q)) \, \divv v - R_q(\varrho, \, q) \, v \cdot \nabla q - \divv(\widetilde{M} \, \tilde{b}(x, \, t))+ \tilde{r}(\varrho, \, q) \, .
\end{split}
\end{align}
We therefore can introduce $\widetilde{\mathscr{A}}(q, \, \varrho, \, v) := (\widetilde{\mathscr{A}}^1(q, \, \varrho, \, v) , \, \mathscr{A}^2(\varrho, \, v) , \, \mathscr{A}^3(q, \, \varrho, \, v) )$, the first component being the differential operator defined by \eqref{A1equiv}. Clearly, $\mathscr{A}(u) = 0$ if and only if $\widetilde{\mathscr{A}}(u) = 0$.
\subsection{Functional setting}
We now introduce a functional setting in which the short--time well--posedness can be proved by relatively elementary means. We essentially follow the parabolic setting of the book \cite{ladu}, which relies on the earlier study \cite{sologeneral}. We use the standard Sobolev spaces $W^{m,p}(\Omega)$ for $m \in \mathbb{N}$ and $1\leq p\leq +\infty$, the Sobolev--Slobodecki spaces $W^s_p(\Omega)$ for non-integer $s >0$ and, with a further index $1 \leq r \leq +\infty$, the parabolic Lebesgue spaces $L^{p,r}(Q)$ (space index first; $L^p(Q) = L^{p,p}(Q)$).
First, we consider the setting for the parabolic variables $v$ and $q$. For $\ell = 1,2, \ldots$, the Banach spaces $W^{2\ell, \ell}_p(Q)$ are defined in Section \ref{mathint}.
For $\ell = 1$, the space $W^{2,1}_p(Q)$ denotes the usual space $ W^1_p(0,T; \, L^p(\Omega)) \cap L^p(0,T; \, W^{2,p}(\Omega))$ of maximal parabolic regularity of index $p$. Moreover, we let $W^{\ell,0}_p(Q) := \{u \in L^p(Q) \, : \, D_x^{\alpha} u \in L^p(Q) \, \forall\,\, |\alpha| \leq \ell \}$. We denote by $C(\overline{Q}) = C^{0,0}(\overline{Q})$ the space of continuous functions over $\overline{Q}$ and, for $\alpha, \, \beta \in [0, \, 1]$, we define the spaces of H\"older continuous functions via
\begin{align*}
C^{\alpha, \, \beta}(\overline{Q}) := & \{ u \in C(\overline{Q}) \, : \, [ u ]_{C^{\alpha,\beta}(\overline{Q})} < + \infty\} \, ,\\
[u]_{ C^{\alpha, \, \beta}(\overline{Q})} = & \sup_{t \in [0, \, T], \, x,y \in \Omega} \frac{|u(t, \, x) - u(t, \, y)|}{|x-y|^{\alpha}} + \sup_{x \in \Omega, \, t,s \in [0,\, T]} \frac{|u(t, \, x) - u(s, \, x)|}{|t-s|^{\beta}} \, .
\end{align*}
\begin{rem}[\textbf{Useful properties of $W^{2,1}_p(Q)$}:]\label{parabolicspace}
\begin{itemize}
\item The spatial differentiation is continuous from $W^{2,1}_p$ into $W^{1,0}_p$, and into $C([0, \, T]; \, W^{1-\frac{2}{p}}_p(\Omega))$;
\item The spatial differentiation is continuous from $W^{2,1}_p$ into $L^{\infty,2p-3}(Q)$, into $L^{z_1 \, p,\infty}(Q)$ and into $L^s(Q)$ for $s = 2p-3 + z_1 \, p$. Here $z_1 = z_1(p) := \frac{3}{5-p}$ for $3 < p < 5$, $z_1 \in ]1, \, \infty[$ arbitrary for $p = 5$, and $z_1 := + \infty$ for $p>5$;
\item For $k \in \mathbb{N}$ and $\alpha \in [0, \, 1]$ such that $k+\alpha \leq 2-\frac{5}{p}$, the space $W^{2,1}_p$ embeds continuously into the H\"older space $C^{k+\alpha,\, 0}(\overline{Q})$, and its elements are bounded;
\item The time differentiation is continuous from $W^{2,1}_p$ into $L^p$;
\end{itemize}
\end{rem}
\begin{proof}
The embedding $W^{2,1}_p(Q_T) \subset C([0, \, T]; \, W^{2-\frac{2}{p}}_p(\Omega))$ is known from the references \cite{sologeneral}, \cite{denkhieberpruess} and several others. Thus, $\frac{d}{dx}$ is a linear continuous operator from $W^{2,1}_p(Q)$ into $C([0, \, T]; \, W^{1-\frac{2}{p}}_p(\Omega))$. With the Sobolev embedding theorem (e. g., 8.3.3 in \cite{kuf} or XI.2.1 in \cite{visi}), we know that $W^{1-\frac{2}{p}}_p(\Omega) \subset L^{\frac{3p}{(5-p)^+}}(\Omega)$. Thus $\frac{d}{dx}$ is continuous into $C([0,T]; \, L^{\frac{3p}{(5-p)^+}}(\Omega))$. For $\alpha := \frac{p}{2p-3}$, the interpolation inequality (see \cite{nirenberginterpo}, Theorem 1)
\begin{align*}
\|\nabla f\|_{L^{\infty}(\Omega)} \leq & C_1 \, \|D^2f\|_{L^p(\Omega)}^{\alpha} \, \|f\|_{L^{\infty}(\Omega)}^{1-\alpha} + C_2 \, \|f\|_{L^{\infty}(\Omega)} \,
\end{align*}
implies that
\begin{align*}
\|\nabla u\|_{L^{\infty,2p-3}(Q_T)}^{2p-3} \leq 2^{2p-3} \, (C_1^{2p-3} \, \|D^2u\|_{L^p(Q_T)}^{p} \, \|u\|^{p-3}_{L^{\infty}(Q_T)} + C_2^{2p-3} \, \|u\|_{L^{\infty,2p-3}(Q_T)}^{2p-3}) \, .
\end{align*}
Thus $\nabla u \in L^{\infty,2p-3}(Q)$. The continuity of $\frac{d}{dx}$ into $W^{1,0}_p$ is obvious.
For $k \in \mathbb{N}$ and $\alpha \in [0, \, 1]$ such that $k+\alpha \leq 2-\frac{5}{p}$, the space $W^{2 - \frac{2}{p}}_p(\Omega)$ embeds continuously into the H\"older space $C^{k+\alpha}(\overline{\Omega})$ (see \cite{visi}, XI.2.1). Thus $W^{2,1}_p(Q)$ embeds continuously into the H\"older space $C^{k+\alpha,\, 0}(\overline{Q})$.
\end{proof}
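To make the exponents in the second item of Remark \ref{parabolicspace} concrete, consider for instance $p = 4$: then $z_1 = \frac{3}{5-4} = 3$, and spatial differentiation maps $W^{2,1}_4$ continuously into

```latex
\begin{align*}
L^{\infty,\,5}(Q) \, , \qquad L^{12,\,\infty}(Q) \, , \qquad L^{17}(Q)
\qquad (2p-3 = 5, \ z_1 \, p = 12, \ s = 5 + 12 = 17) \, .
\end{align*}
```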
Next, we consider the appropriate functional space setting for the continuity equation. Since this equation is of a different type, some asymmetry cannot be avoided. We introduce the space
\begin{align*}
W^{1,1}_{p,\infty}(Q) & := \{u \in L^{p,\infty}(Q) \, : \, u_t, \, u_{x_i} \in C([0,T]; \, L^{p}(\Omega)) \text{ for } i = 1,2,3\} \, ,\\
\|u\|_{W^{1,1}_{p,\infty}(Q)} & := \|u\|_{L^{p,\infty}(Q)} + \|u_x\|_{L^{p,\infty}(Q)} + \|u_t\|_{L^{p,\infty}(Q)} \, .
\end{align*}
\begin{rem}[\textbf{Properties of $W^{1,1}_{p,\infty}(Q)$}]\label{contispace}
\begin{itemize}
\item The space $W^{1,1}_{p,\infty}$ embeds continuously into the isotropic H\"older space $C^{1-\frac{3}{p}}(\overline{Q})$, and its elements are bounded;
\item The spatial differentiation is continuous from $W^{1,1}_{p,\infty}$ into $L^{p,\infty}(Q)$;
\item The time differentiation is continuous from $W^{1,1}_{p,\infty}$ into $L^{p,\infty}(Q)$;
\end{itemize}
\end{rem}
\begin{proof}
While the last two properties are obvious, the first one follows from the anisotropic embedding result in the appendix of \cite{krejcipanizzi}.
\end{proof}
Besides the various Sobolev embedding results, we shall use for $p > 3$ the interpolation inequality (see \cite{nirenberginterpo}, Theorem 1)
\begin{align}
\label{gagliardo}\|\nabla f\|_{L^{\infty}(\Omega)} \leq & C_1 \, \|D^2f\|_{L^p(\Omega)}^{\alpha} \, \|f\|_{L^p(\Omega)}^{1-\alpha} + C_2 \, \|f\|_{L^p(\Omega)}
\end{align}
valid with $\alpha := \frac{1}{2}+\frac{3}{2p}$ for any function $f$ in $W^{2,p}(\Omega)$.
We consider the operator $(q, \, \varrho,\, v) \mapsto \mathscr{A}(q, \, \varrho,\, v)$ as acting in the product space
\begin{align}\label{STATESPACE}
\mathcal{X}_T := W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1}) \times W^{1,1}_{p,\infty}(Q_T) \times W^{2,1}_p(Q_T; \, \mathbb{R}^3) \, .
\end{align}
Since the coefficients of $\mathscr{A}$ are defined only for positive $\varrho$, the domain of the operator is contained in the subset of elements with strictly positive second component
\begin{align}\label{STATESPACEPOS}
\mathcal{X}_{T,+} := W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1}) \times W^{1,1}_{p,\infty}(Q_T; \, \mathbb{R}_+) \times W^{2,1}_p(Q_T; \, \mathbb{R}^3) \, .
\end{align}
Since $\mathscr{A}$ is a composition of differentiation, multiplication and Nemytskii operators, the properties above allow us to prove the following statement.
\begin{lemma}\label{IMAGESPACE}
If the coefficients $R$, $\widetilde{M}$ and $P$ are continuously differentiable in their domain of definition $\mathbb{R}_+ \times \mathbb{R}^{N-1}$, the operator $\mathscr{A}$ is continuous and bounded from $\mathcal{X}_{T,+}$ into
\begin{align*}
\mathcal{Z}_T = L^p(Q_T; \, \mathbb{R}^{N-1}) \times L^{p,\infty}(Q_T) \times L^p(Q_T; \, \mathbb{R}^{3}) \, .
\end{align*}
If the coefficients $R$, $\widetilde{M}$ and $P$ are twice continuously differentiable in their domain of definition $\mathbb{R}_+ \times \mathbb{R}^{N-1}$, the operator $\mathscr{A}$ is continuously differentiable at every point of $\mathcal{X}_{T,+}$.
\end{lemma}
The same holds for the operator $\widetilde{\mathscr{A}}$. The proof of Lemma \ref{IMAGESPACE} can be carried out using standard differential calculus and the properties stated in Remarks \ref{parabolicspace} and \ref{contispace}. To save room, we abstain from presenting it; the same estimates are needed in the proofs of the main theorems anyway, and shall be exposed there. We shall moreover make use of a reduced state space, containing only the parabolic components $(q, \, v)$, namely
\begin{align}\label{parabolicSTATESPACE}
\mathcal{Y}_T := W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1}) \times W^{2,1}_p(Q_T; \, \mathbb{R}^3) \, .
\end{align}
\textbf{Some short remarks on notation:} 1. We shall \emph{never} employ locally H\"older continuous functions; for the sake of notation we identify $C^{\alpha, \, \beta}(Q)$ with $C^{\alpha, \, \beta}(\overline{Q})$. 2. Whenever confusion is impossible, we shall also employ, for a function $f$ of the variables $x \in \Omega$ and $t \geq 0$, the notations $f_x = \nabla f$ for the spatial gradient and $f_t$ for the time derivative. 3. For the coefficients $R$, $\widetilde{M}$, etc., which are functions of $\varrho$ and $q$, the partial derivatives are denoted $R_{\varrho}$, $\widetilde{M}_q$, etc.
\subsubsection{Boundary conditions and traces}
As before, we let $S_T = \partial \Omega \times ]0, \, T[$. As is well known, there is a well-defined trace operator $\text{tr}_{S_T} \in \mathscr{L}(W^{1,0}_p(Q), \, L^p(S_T))$ (even continuous with values in $L^p(0,T; \, W^{1-\frac{1}{p}}_p(\partial \Omega))$). Since $W^{2\ell,\ell}_p(Q) \subset W^{1,0}_p(Q)$ for $\ell \geq 1$, we can meaningfully define a Banach space
\begin{align*}
\text{Tr}_{S_T}\, W^{2\ell,\ell}_p(Q) :=& \{ f \in L^p(S_T) \, : \, \exists \bar{f} \in W^{2\ell,\ell}_p(Q), \, \text{tr}_{S_T}(\bar{f}) = f\} \, ,\\
\|f\|_{\text{Tr}_{S_T} W^{2\ell,\ell}_p(Q)} :=& \inf_{\bar{f} \in W^{2\ell,\ell}_p(Q), \, \text{tr}_{S_T}(\bar{f}) = f} \, \|\bar{f}\|_{W^{2\ell,\ell}_p(Q)} \,.
\end{align*}
These spaces have been exactly characterised in terms of anisotropic fractional Sobolev spaces on the manifold $S_T$; the topic is highly technical. In particular, it is known that $\text{Tr}_{S_T}\, W^{2,1}_p(Q) = W^{2-\frac{1}{p}, \, 1-\frac{1}{2p}}_p(S_T)$: see \cite{denkhieberpruess}, while the older references \cite{ladu}, \cite{sologeneral} seem to show only the inclusion $\text{Tr}_{S_T}\, W^{2,1}_p(Q) \subseteq W^{2-\frac{1}{p}, \, 1-\frac{1}{2p}}_p(S_T)$.
Next, we consider the conditions on the surface $\Omega \times \{0\}$, i.e.\ the initial conditions. There is a well-defined trace operator $\text{tr}_{\Omega \times \{0\}} \in \mathscr{L}(C([0,T]; \, L^p(\Omega)), \, L^p(\Omega))$. Note that $W^{2\ell,\ell}_p(Q) \subset C([0,T]; \, L^p(\Omega))$ for $\ell \geq 1$. Thus, we can define similarly
\begin{align*}
\text{Tr}_{\Omega \times \{0\}} W^{2\ell,\ell}_p(Q) := & \{ f \in L^p(\Omega) \, : \, \exists \bar{f} \in W^{2\ell,\ell}_p(Q), \, \text{tr}_{\Omega \times \{0\}}(\bar{f}) = f\} \, ,\\
\|f\|_{\text{Tr}_{\Omega \times \{0\}} W^{2\ell,\ell}_p(Q)} := & \inf_{\bar{f} \in W^{2\ell,\ell}_p(Q), \, \text{tr}_{\Omega \times \{0\}}(\bar{f}) = f} \, \|\bar{f}\|_{W^{2\ell,\ell}_p(Q)} \,.
\end{align*}
It is known that $\text{Tr}_{\Omega \times \{0\}} W^{2,1}_p(Q) = W^{2-\frac{2}{p}}_p(\Omega)$: see \cite{sologeneral}, or \cite{denkhieberpruess} and the references therein for a complete characterisation using Besov spaces. Here we can restrict ourselves to the Slobodecki space, since $2-\frac{2}{p}$ is necessarily non-integer for $p > 3$. The spaces of zero initial conditions are defined via
\begin{align*}
\phantom{}_0W^{2,1}_p(Q_T) & := \{u \in W^{2,1}_p(Q_T) \, : \, u(0) = 0\}\, , \\
\phantom{}_0W^{1,1}_{p,\infty}(Q_T) & := \{u \in W^{1,1}_{p,\infty}(Q_T) \, : \, u(0) = 0\} \, ,\\
\phantom{}_0\mathcal{X}_T & := \phantom{}_0W^{2,1}_p(Q_T;\, \mathbb{R}^{N-1}) \times \phantom{}_0W^{1,1}_{p,\infty}(Q_T) \times \phantom{}_0W^{2,1}_p(Q_T; \, \mathbb{R}^3) \, ,\\
\phantom{}_0\mathcal{Y}_T & := \phantom{}_0W^{2,1}_p(Q_T;\, \mathbb{R}^{N-1}) \times \phantom{}_0W^{2,1}_p(Q_T; \, \mathbb{R}^3) \, .
\end{align*}
\subsubsection{Compatible extension of the boundary data}
The boundary operator for the problem \eqref{mass2}, \eqref{mass2tot}, \eqref{momentum2} on $S_T$ is chosen as simply as possible: linear and homogeneous (see \eqref{lateralq}, \eqref{lateralv}). Thus, we consider $\mathscr{B}(q, \, \varrho,\, v)$ given by
\begin{align*}
\mathscr{B}_1(q, \, \varrho, \, v) = \mathscr{B}_1(q) := & \nu \cdot \nabla q \, ,\\
\mathscr{B}_2 :\equiv & 0 \, ,\\
\mathscr{B}_3(q, \, \varrho, \, v) = \mathscr{B}_3(v) := & v \, .
\end{align*}
The operator $\mathscr{B}$ is acting on the space $\mathcal{X}_T$.
As usual for higher regularity, the choice of the initial conditions is restricted by the choice of the boundary operator. The conditions $q^0_i \in \text{Tr}_{\Omega \times \{0\}} W^{2,1}_p(Q_T)$, $v^0_i \in \text{Tr}_{\Omega \times \{0\}} W^{2,1}_p(Q_T)$ first guarantee the existence of liftings $\hat{q}^0 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$ and $\hat{v}^0 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{3})$. It is moreover necessary to choose these liftings in such a way that they also belong to the kernel of the boundary operator. In order to find $\hat{q}^0 \in W^{2,1}_p$ satisfying $\hat{q}^0(0) = q^0$ and $\mathscr{B}_1(\hat{q}^0) = \nu \cdot \nabla \hat{q}^0 = 0$ on $S_T$, and $\hat{v}^0\in W^{2,1}_p$ satisfying $\hat{v}^0(0) = v^0$ and $\mathscr{B}_3(\hat{v}^0) = \hat{v}^0 = 0$ on $S_T$, we refer to the $L^p$-theory of the Neumann/Dirichlet problem for the heat equation (see, among others, the monograph \cite{ladu}). There is, in both cases, one necessary compatibility condition,
\begin{align*}
\nu \cdot \nabla q^0 = 0 \text{ on } \partial \Omega, \quad v^0 = 0 \text{ on } \partial \Omega \, ,
\end{align*}
which make sense as identities in $\text{Tr}_{\partial\Omega} W^{1-\frac{2}{p}}_p(\Omega) =W^{1-\frac{3}{p}}_p(\partial \Omega) $ and in $\text{Tr}_{\partial\Omega} W^{2-\frac{2}{p}}_p(\Omega) = W^{2-\frac{3}{p}}_p(\partial \Omega)$, respectively. In order to find an extension of $\varrho_0 \in W^{1,p}(\Omega)$, we solve the problem
\begin{align}\label{Extendrho0}
\partial_t \hat{\varrho}_0 + \divv(\hat{\varrho}_0 \, \hat{v}^0) =0, \quad \hat{\varrho}_0(0) = \varrho_0 \, .
\end{align}
For this problem, Theorem 2 of \cite{solocompress} establishes unique solvability in $W^{1,1}_{p,\infty}(Q_T)$ and, among others, the strict positivity $\hat{\varrho}_0 \geq c_0(\Omega, \, \|\hat{v}^0\|_{W^{2,1}_p(Q_T; \, \mathbb{R}^3)}) \, \inf_{x \in \Omega} \varrho_0(x)$.
\section{Linearisation and reformulation as a fixed-point equation}\label{twomaps}
We shall present two different manners to linearise the equation $\mathscr{A}(u) = 0$ for $u \in \mathcal{X}_T$ with initial condition $u(0) = u_0$ in $\text{Tr}_{\Omega\times\{0\}} \, \mathcal{X}_T$:
\begin{itemize}
\item The first method is used to prove the statements on short-time existence in Theorems \ref{MAIN} and \ref{MAIN2};
\item The second technique shall be used to prove the global existence for restricted data in Theorem \ref{MAIN3};
\end{itemize}
The attentive reader will notice that the main estimate for the second linearisation technique would also allow us to prove the short-time existence. However, this technique has the drawback of being applicable only if the initial data possess more smoothness than generic elements of the state space $\mathcal{X}_T$; thus, it does not allow us to prove a semi-flow property. For this reason we think that, at the price of being lengthy, presenting the first method remains necessary.
In both cases, we start by considering the problem of finding $u =(q, \, \varrho ,\, v) \in \mathcal{X}_{T,+}$ such that $\widetilde{\mathscr{A}}(u) = 0$ and $u(0) = u_0$, which possesses the following structure:
\begin{align*}
\partial_t \varrho + \divv (\varrho \, v) = & 0 \, ,\\
R_{q}(\varrho, \, q) \, \partial_t q - \divv (\widetilde{M}(\varrho, \, q) \, \nabla q) = & g(x, \, t, \, q, \, \varrho,\, v, \, \nabla q, \, \nabla \varrho, \, \nabla v) \, ,\\
\varrho \, \partial_t v - \divv \mathbb{S}(\nabla v) = & f(x, \, t,\, q, \, \varrho,\, v, \, \nabla q, \, \nabla \varrho, \, \nabla v) \, .
\end{align*}
For the original problem, the functions $g$ and $f$ have the following expressions
\begin{align}\label{A1right}
& g(x, \, t,\, q, \, \varrho,\, v, \, \nabla q, \, \nabla \varrho, \, \nabla v) := (R_{\varrho}(\varrho,\, q) \, \varrho - R(\varrho,\, q)) \, \divv v - R_q(\varrho,\, q) \, v \cdot \nabla q\nonumber \\
& \qquad - \widetilde{M}_{\varrho}(\varrho, \, q) \, \nabla \varrho \cdot \tilde{b}(x, \,t) - \widetilde{M}_{q}(\varrho, \, q) \, \nabla q \cdot \tilde{b}(x, \,t) -\widetilde{M}(\varrho, \, q) \, \divv \tilde{b}(x, \,t) + \tilde{r}(\varrho, \, q) \, , \\[0.2cm]
& \label{A3right} f(x, \, t,\, q, \, \varrho,\, v, \, \nabla q, \, \nabla \varrho, \, \nabla v) := - P_{\varrho}(\varrho,\, q) \, \nabla \varrho - P_{q}(\varrho,\, q) \, \nabla q
- \varrho \, (v\cdot \nabla)v \nonumber\\
& \phantom{f(x, \, t,\, q, \, \varrho,\, v, \, \nabla q, \, \nabla \varrho, \, \nabla v) } \qquad+ R(\varrho, \,q) \cdot \tilde{b}(x, \,t) + \varrho \, \bar{b}(x, \,t) \, .
\end{align}
In the proofs, however, we consider the abstract general form of the right-hand sides. We shall also regard $g$ and $f$ as functions of $x, \,t$ and the vectors $u$ and $D_x u$, and write $g(x, \,t, \, u, \, D_xu)$ etc.
\subsection{The first fixed-point equation}
For $u^* = (q^*, \, v^*)$ given in $ \mathcal{Y}_T$ (cf. \eqref{parabolicSTATESPACE}) and for unknowns $u = (q, \, \varrho, \, v)$, we consider the following system of equations
\begin{align}\label{linearT1}
\partial_t \varrho + \divv (\varrho \, v^*) = & 0 \, ,\\
\label{linearT2} R_{q}(\varrho, \,q^*) \, \partial_t q - \divv (\widetilde{M}(\varrho, \, q^*) \, \nabla q) = & g(x, \, t, \, q^*, \, \varrho,\, v^*, \, \nabla q^*, \, \nabla \varrho, \, \nabla v^*) \, ,\\
\label{linearT3} \varrho \, \partial_t v - \divv \mathbb{S}(\nabla v) = & f(x, \, t,\, q^*, \, \varrho,\, v^*, \, \nabla q^*, \, \nabla \varrho, \, \nabla v^*) \, ,
\end{align}
together with the initial conditions \eqref{initialq}, \eqref{initialrho}, \eqref{initialv} and the homogeneous boundary conditions \eqref{lateralv}, \eqref{lateralq}. Note that the continuity equation can be solved independently for $\varrho$. Once $\varrho$ is given, the problem \eqref{linearT2}, \eqref{linearT3} is linear in $(q, \, v)$.
We will show that the solution map $(q^*, \, v^*) \mapsto (q, \, v)$, denoted $\mathcal{T}$, is well defined from $\mathcal{Y}_T$ into itself. The solutions are unique in the class $\mathcal{Y}_T$. Clearly, a fixed point of $\mathcal{T}$ is a solution to $\widetilde{\mathscr{A}}(q, \, \varrho, \, v) = 0$.
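To make the decoupled first step concrete: for smooth $v^*$, the continuity equation \eqref{linearT1} can be solved by the method of characteristics. The following is only a heuristic sketch; the rigorous solvability and uniqueness in our function class are provided by the results quoted later (cf.\ Proposition \ref{solonnikov2}).

```latex
% Along the characteristics X(t; x) defined by
%   (d/dt) X(t; x) = v^*(X(t; x), t),  X(0; x) = x,
% the continuity equation reduces to a linear ODE:
\frac{d}{dt}\,\varrho(X(t;x),\,t)
  = \big(\partial_t \varrho + v^* \cdot \nabla \varrho\big)(X(t;x),\,t)
  = -\,\varrho\,\divv v^*(X(t;x),\,t)\, ,
% whence the explicit representation
\varrho(X(t;x),\,t)
  = \varrho_0(x)\,
    \exp\Big(-\int_0^t \divv v^*(X(s;x),\,s)\,ds\Big)\, .
```

In particular, $\varrho$ remains strictly positive whenever $\varrho_0$ is, with upper and lower bounds controlled by $\int_0^t \|\divv v^*(\cdot, \, s)\|_{L^{\infty}(\Omega)}\,ds$.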
\subsection{The second fixed-point equation}
We consider a given vector $\hat{u}^0 = (\hat{q}^0, \, \hat{\varrho}^0, \, \hat{v}^0) \in \mathcal{X}_T$ such that $\hat{q}^0$ and $\hat{v}^0$ satisfy the initial compatibility conditions. Moreover, we assume that $\hat{\varrho}^0$ obeys \eqref{Extendrho0}.
Consider a solution $u = (q, \,\varrho, \, v) \in \mathcal{X}_T$ to $\widetilde{\mathscr{A}}(u) = 0$. We introduce the differences $r := q - \hat{q}^0$, $w := v - \hat{v}^0$ and $\sigma := \varrho- \hat{\varrho}^0$, and the vector $\bar{u} := (r, \, \sigma, \, w)$. Clearly, $\bar{u}$ belongs to the space $\phantom{}_0\mathcal{X}_T$ of homogeneous initial conditions. The equations $\widetilde{\mathscr{A}}(u) = 0$ shall be equivalently re-expressed as a problem for the vector $\bar{u}$ via $\widetilde{\mathscr{A}}(\hat{u}^0 +\bar{u}) = 0$. The vector $\bar{u} = (r, \, \sigma, \, w)$ satisfies
\begin{align}\label{equationdiff1}
R_q \, \partial_t r - \divv (\widetilde{M} \, \nabla r) = g^1 := & g - R_q \, \partial_t \hat{q}^0 + \widetilde{M} \, \triangle \hat{q}^0 - \widetilde{M}_{\varrho} \, \nabla \varrho \cdot \nabla \hat{q}^0 \\
& - \widetilde{M}_{q} \, \nabla q \cdot \nabla \hat{q}^0\, ,\nonumber\\
\label{equationdiff2}
\partial_t \sigma + \divv(\sigma \, v) =& - \divv(\hat{\varrho}^0 \, w) \, ,\\
\label{equationdiff3}
\varrho \, \partial_t w - \divv \mathbb{S}(\nabla w) = & f^1 := f - \varrho \, \partial_t \hat{v}^0 + \divv \mathbb{S}(\nabla \hat{v}^0) \, .
\end{align}
Herein, the coefficients $R, \, R_q$, etc.\ are evaluated at $(\varrho, \, q)$, while $g$ and $f$ correspond to \eqref{A1right} and \eqref{A3right}.
We next want to construct a fixed-point map to solve \eqref{equationdiff1}, \eqref{equationdiff2}, \eqref{equationdiff3} by linearising the operators $g^1$ and $f^1$ defined in \eqref{equationdiff1} and \eqref{equationdiff3}. At a point $u^* = (q^*, \, \varrho^*, \, v^*) \in \mathcal{X}_{T,+}$ (cf. \eqref{STATESPACEPOS}), we can expand as follows:
\begin{align*}
g = g(x, \, t,\, u^*, \, D_xu^*) + \int_{0}^1 & \{ (g_{q})^{\theta} \, (q-q^*) + (g_{\varrho})^{\theta} \, (\varrho-\varrho^*) + (g_v)^{\theta} \, (v-v^*) \\
& + (g_{q_x})^{\theta} \cdot (q_x-q^*_x) + (g_{\varrho_x})^{\theta} \, (\varrho_x- \varrho^*_x) + (g_{v_x})^{\theta} \cdot (v_x-v^*_x)\} \, d\theta\nonumber \, .
\end{align*}
Here the brackets $( \cdot )^{\theta}$, if applied to a function of $x, \, t$, $u$ and $D^1_x u$, stand for the evaluation at $(x, \, t,\, (1-\theta) \, u^* + \theta \, u, \, (1-\theta) \, D_xu^* + \theta \, D_xu)$. In short, in order to avoid the integral and the parameter $\theta$, we write
\begin{align}\label{Arightlinear}
g = &g(x, \, t,\, u^*, \, D_xu^*) + g_{q}(u, \, u^*) \, (q-q^*) + g_{\varrho}(u, \, u^*) \, (\varrho-\varrho^*)+ g_v(u, \, u^*) \, (v-v^*) \nonumber\\
& + g_{q_x}(u, \, u^*) \cdot (q_x-q^*_x) + g_{\varrho_x}(u, \, u^*) \, (\varrho_x- \varrho^*_x) + g_{v_x}(u, \, u^*) \cdot (v_x-v^*_x) \nonumber\\
=: & g(x, \, t,\, u^*, \, D_xu^*) + g^{\prime}(u, \, u^*) \, (u - u^*) \, .
\end{align}
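The expansion above is an instance of the Hadamard (integral mean value) representation; for the reader's convenience, a sketch in compressed notation: for a $C^1$ function $h = h(u, \, D_xu)$,

```latex
h(u,\,D_xu) - h(u^*,\,D_xu^*)
  = \int_0^1 \frac{d}{d\theta}\,
      h\big((1-\theta)\,u^* + \theta\,u,\;(1-\theta)\,D_xu^* + \theta\,D_xu\big)\, d\theta
  = \int_0^1 \big\{ (h_u)^{\theta}\,(u-u^*)
      + (h_{u_x})^{\theta}\cdot(D_xu - D_xu^*) \big\}\, d\theta \, .
```

Applied componentwise to $g$, this yields \eqref{Arightlinear}; in particular, the coefficients $g_{q}(u, \, u^*)$, $g_{q_x}(u, \, u^*)$, etc.\ are $\theta$-averaged partial derivatives of $g$ and inherit its continuity properties.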
We follow this scheme and write in short
\begin{align}\label{Arightlinear2}
g^1 = & g^1(x, \, t,\, \hat{q}^0, \, \hat{\varrho}^0, \, \hat{v}^0, \, \hat{q}_x^0, \, \hat{\varrho}_x^0, \, \hat{v}_x^0) + g^1_{q}(u, \, \hat{u}^0) \, r + g^1_{\varrho}(u, \, \hat{u}^0) \, \sigma + g^1_v(u, \, \hat{u}^0)\, w \nonumber\\
& + g^1_{q_x}(u, \, \hat{u}^0) \, r_x + g^1_{\varrho_x}(u, \, \hat{u}^0) \, \sigma_x + g^1_{v_x}(u, \, \hat{u}^0) \, w_x \, \nonumber\\
=: & \hat{g}^0 + (g^1)^{\prime}(u, \, \hat{u}^0) \, \bar{u} \, .
\end{align}
With obvious modifications, we have the same formula for $f^1$.
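For completeness, the shorthand for $f^1$ announced above reads, with the same convention as in \eqref{Arightlinear2}:

```latex
f^1 = \; & f^1(x,\,t,\,\hat{q}^0,\,\hat{\varrho}^0,\,\hat{v}^0,\,
          \hat{q}^0_x,\,\hat{\varrho}^0_x,\,\hat{v}^0_x)
      + f^1_{q}(u,\,\hat{u}^0)\,r + f^1_{\varrho}(u,\,\hat{u}^0)\,\sigma
      + f^1_{v}(u,\,\hat{u}^0)\,w \\
      & + f^1_{q_x}(u,\,\hat{u}^0)\,r_x
      + f^1_{\varrho_x}(u,\,\hat{u}^0)\,\sigma_x
      + f^1_{v_x}(u,\,\hat{u}^0)\,w_x
  \; =: \; \hat{f}^0 + (f^1)^{\prime}(u,\,\hat{u}^0)\,\bar{u} \, .
```

The quantity $\hat{f}^0$ defined here is the one entering the linear problem \eqref{linearT3second}.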
Now we construct the fixed-point map to solve \eqref{equationdiff1}, \eqref{equationdiff2}, \eqref{equationdiff3}. For a given vector $(r^*, \, w^*) \in \phantom{}_0\mathcal{Y}_T$, we define $q^* := \hat{q}^0 + r^*$ and $v^* := \hat{v}^0 + w^*$. We employ the abbreviation
\begin{align}\label{ustar}
u^* := & (q^*, \, \mathscr{C}(v^*), \, v^*) \in \mathcal{X}_{T,+} \, ,
\end{align}
where $\mathscr{C}$ is the solution operator to the continuity equation with initial datum $\varrho_0$.
For $\bar{u} := (r, \, \sigma, \, w)$, we next consider the linear problem
\begin{alignat}{2}
\label{linearT1second} R_q(\mathscr{C}(v^*), \, q^* ) \, \partial_t r - \divv (\widetilde{M}(\mathscr{C}(v^*), \, q^* ) \, \nabla r) =& \hat{g}^0 + (g^{1})^{\prime}(u^*, \, \hat{u}^0) \, \bar{u} \, ,& & \\
\label{linearT2second} \partial_t \sigma + \divv(\sigma \, v^*) = & - \divv(\hat{\varrho}^0 \, w) \, , & & \\
\label{linearT3second} \mathscr{C}(v^*) \, \partial_t w - \divv \mathbb{S}(\nabla w) = & \hat{f}^0 + (f^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u} \, , & &
\end{alignat}
with the boundary conditions $\nu \cdot \nabla r = 0$ and $w = 0$ on $S_T$, and with zero initial conditions. We will show that the solution map $(r^*, \, w^*) \mapsto (r, \, w)$, denoted $\mathcal{T}^1$, is well defined from $\phantom{}_0\mathcal{Y}_T$ into itself.
\begin{rem}
If $(r, \, w)$ is a fixed point of $\mathcal{T}^1$, then $u := \hat{u}^0 + (r, \, \sigma, \, w)$ is a solution to $\widetilde{\mathscr{A}}(u) = 0$.
\end{rem}
\begin{proof}
To see this, we note first that a fixed point satisfies $\mathcal{T}^1(r, \, w) = (r, \, w)$, hence the following equations are valid:
\begin{alignat*}{2}
R_q(\mathscr{C}(v), \, q) \, \partial_t r - \divv (\widetilde{M}(\mathscr{C}(v), \, q) \, \nabla r) =& \hat{g}^0 + (g^1)^{\prime}((q, \, \mathscr{C}(v), \, v), \, \hat{u}^0) \, \bar{u} \, ,& & \\
\partial_t \sigma + \divv(\sigma \, v) = & - \divv(\hat{\varrho}^0 \, w) \, , & & \\
\mathscr{C}(v) \, \partial_t w - \divv \mathbb{S}(\nabla w) = & \hat{f}^0 + (f^1)^{\prime}((q, \, \mathscr{C}(v), \, v), \, \hat{u}^0) \, \bar{u} \, . & &
\end{alignat*}
Adding to the second equation the identity \eqref{Extendrho0}, valid by construction, we see that $\tilde{\varrho} := \hat{\varrho}^0 + \sigma$ is a solution to the continuity equation with velocity $v$ and initial data $\varrho^0$. Thus $\tilde{\varrho} = \mathscr{C}(v)$ (uniqueness for the continuity equation, cf. Proposition \ref{solonnikov2} below). Now, we see by the definitions of $(g^1)^{\prime}$ and $(f^1)^{\prime}$ (cf.\ \eqref{Arightlinear2}) that
\begin{align*}
\hat{g}^0 + (g^1)^{\prime}((q, \, \mathscr{C}(v), \, v), \, \hat{u}^0) \, \bar{u} = & \hat{g}^0 + (g^1)^{\prime}((\hat{q}^0+r, \, \hat{\varrho}^0+\sigma, \, \hat{v}^0 + w), \, \hat{u}^0) \, (r, \, \sigma, \, w)\\
=& g^1(\hat{q}^0+r, \, \hat{\varrho}^0+\sigma, \, \hat{v}^0 + w)
\end{align*}
and, analogously, $\hat{f}^0 + (f^1)^{\prime}((q, \, \mathscr{C}(v), \, v), \, \hat{u}^0) \, \bar{u} = f^1$. Thus we recover a solution to the equations \eqref{equationdiff1}, \eqref{equationdiff2} and \eqref{equationdiff3}.
\end{proof}
\subsection{The self-mapping property}
Assuming for the moment that the map $\mathcal{T}$, $(q^*, \, v^*) \mapsto (q, \, v)$, defined via the solution to \eqref{linearT1}, \eqref{linearT2}, \eqref{linearT3}, is well defined on the state space $\mathcal{Y}_T$, the main difficulty in proving the existence of a fixed point is to show that $\mathcal{T}$ maps some closed bounded set of $\mathcal{Y}_T$ into itself. If $\mathcal{T}$ is well defined and continuous, we shall rely on the continuity estimate
\begin{align}\label{CONTROLLEDGROWTH}
\|(q, \, v)\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1}) \times W^{2,1}_p(Q_t; \, \mathbb{R}^{3})} \leq \Psi(t, \, R_0, \, \|(q^*, \, v^*)\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1}) \times W^{2,1}_p(Q_t; \, \mathbb{R}^{3})}) \, ,
\end{align}
valid for all $t \leq T$ with a function $\Psi$ continuous in all arguments. Here $R_0$ is a parameter standing for the magnitude of the initial data $q^0$, $\varrho_0$ and $v^0$ and of the external forces $b$ in their respective norms. An important observation of the paper \cite{solocompress} is the following.
\begin{lemma}\label{selfmapT}
Suppose that $R_0 > 0$ is fixed. Suppose that for all $t \leq T$, the inequality \eqref{CONTROLLEDGROWTH} is valid with a continuous function $\Psi = \Psi(t, \, R_0, \, \eta) \geq 0$ defined for all $t \geq 0$ and $\eta \geq 0$ and increasing in these arguments. Assume moreover that $\Psi(0, \, R_0, \, \eta) = \Psi^0(R_0) > 0$ is independent of $\eta$. Then there are $t_0 = t_0(R_0) > 0$ and $\eta_0 = \eta_0(R_0) > 0$ such that $\mathcal{T} \, (q^*, \, v^*) := (q, \, v)$ maps the closed ball with radius $\eta_0$ in $\mathcal{Y}_{t_0}$ into itself.
\end{lemma}
\begin{proof}
We claim that $\eta_0 := \inf \{\eta > 0 \, : \, \Psi(t_0, \, R_0, \, \eta) \leq \eta\}$ is positive and finite for suitably chosen $t_0$ (we use the convention $\inf \emptyset = +\infty$). First, the set defining $\eta_0$ is nonempty for small $t_0$: choosing $\eta := 2 \, \Psi^0(R_0)$, the continuity of $\Psi$ in time yields $\Psi(t_0, \, R_0, \, \eta) \leq \eta$ for all sufficiently small $t_0 > 0$, since $\Psi(0, \, R_0, \, \eta) = \Psi^0(R_0) < \eta$. Second, if $\eta_0 = 0$, then the continuity of $\Psi$ (and its nonnegativity) would yield $\Psi(t_0, \, R_0, \, 0) = 0$, hence, by monotonicity in time, the contradiction $\Psi(0, \, R_0, \, 0) = \Psi^0(R_0) = 0$. Thus $0 < \eta_0 < +\infty$, and continuity gives $\Psi(t_0, \, R_0, \, \eta_0) \leq \eta_0$. Since $\Psi$ is increasing in its last argument, \eqref{CONTROLLEDGROWTH} implies
\begin{align*}
\|\mathcal{T} \, (q^*, \, v^*)\|_{\mathcal{Y}_{t_0}} \leq \Psi(t_0, \, R_0, \, \|(q^*, \, v^*)\|_{\mathcal{Y}_{t_0}}) \leq \Psi(t_0, \, R_0, \, \eta_0) \leq \eta_0 \, ,
\end{align*}
whenever $\|(q^*, \, v^*)\|_{\mathcal{Y}_{t_0}} \leq \eta_0$. Hence $\mathcal{T} $ maps the closed ball with radius $\eta_0$ in $\mathcal{Y}_{t_0}$ into itself.
\end{proof}
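A toy example illustrating the mechanism of Lemma \ref{selfmapT} (the specific form of $\Psi$ is hypothetical and chosen only for illustration):

```latex
\Psi(t, \, R_0, \, \eta) := \Psi^0(R_0) + t \, (1 + \eta)^2 \, .
% This \Psi is continuous, increasing in t and \eta, and satisfies
% \Psi(0, R_0, \eta) = \Psi^0(R_0) independently of \eta. Choosing
%   \eta_0 := 2 \, \Psi^0(R_0) ,
%   t_0 := \Psi^0(R_0) / (1 + 2\,\Psi^0(R_0))^2 ,
% one checks directly that
\Psi(t_0, \, R_0, \, \eta_0)
  = \Psi^0(R_0) + t_0 \, (1 + 2\,\Psi^0(R_0))^2
  = 2\,\Psi^0(R_0) = \eta_0 \, ,
% so the closed ball of radius \eta_0 in \mathcal{Y}_{t_0}
% is mapped into itself.
```

Note how $t_0$ and $\eta_0$ degenerate only through $R_0$: larger data force a shorter guaranteed existence time, as in the statement of the lemma.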
The strategy for proving Theorem \ref{MAIN3} shall be quite similar. We use here the map $\mathcal{T}^1$, $(r^*, \, w^*) \mapsto (r, \, w)$, defined via the solution to \eqref{linearT1second}, \eqref{linearT2second} and \eqref{linearT3second}. In this case, the fixed point we look for lies in the space $\phantom{}_0\mathcal{Y}_T$, and we expect a continuity estimate of the type
\begin{align}\label{CONTROLLEDGROWTH2}
\|(r, \, w)\|_{\phantom{}_0\mathcal{Y}_T} \leq \Psi(T, \, R_0, \, R_1, \, \|(r^*, \, w^*)\|_{\phantom{}_0\mathcal{Y}_T}) \, .
\end{align}
Here $R_0$ stands for the magnitude of the initial data $q^0$, $\varrho_0$ and $v^0$ and of the external forces $b$, while the parameter $R_1$ expresses the distance of these initial data to a stationary/equilibrium solution.
\begin{lemma}\label{selfmapTsecond}
Suppose that $T > 0$ and $R_0 >0$ are arbitrary but fixed. Suppose that \eqref{CONTROLLEDGROWTH2} is valid with a continuous function $\Psi = \Psi(T, \, R_0, \, R_1, \, \eta)$ defined for all $R_1 \geq 0$ and $\eta \geq 0$, and increasing in these arguments. Assume moreover that $\Psi(T, \, R_0, \, 0, \, \eta) = 0$ and that $\Psi(T, \, R_0, \, R_1, \, 0) > 0$. Then, there is $\delta > 0$ such that if $R_1 \leq \delta$, we can find $\eta_0 > 0$ such that $\mathcal{T}^1$ maps the set $\{\bar{u} \in \phantom{}_0\mathcal{Y}_T \, : \, \|\bar{u}\|_{\mathcal{Y}_T} \leq \eta_0\}$ into itself.
\end{lemma}
The proof can be left to the reader, as it is entirely analogous to that of Lemma \ref{selfmapT}.
In order to prove the theorems, we shall therefore establish the continuity estimates \eqref{CONTROLLEDGROWTH} and \eqref{CONTROLLEDGROWTH2}. This is the main object of the next sections.
\section{Estimates of linearised problems}\label{ESTI}
In this section, we present the estimates on which our main results, Theorems \ref{MAIN} and \ref{MAIN2}, rest. To motivate the procedure, recall that we want to prove the continuity estimate \eqref{CONTROLLEDGROWTH} for the map $\mathcal{T}$ of Section \ref{twomaps}. With this in mind, it shall be easier for the reader to follow the technical exposition. The proof is split into several subsections. To simplify the notation, we introduce, for a function or vector field $f \in W^{2,1}_p(Q_T; \, \mathbb{R}^k)$ ($p > 3$ fixed, $k\in \mathbb{N}$) and $t \leq T$, the notation
\begin{align}\label{Vfunctor}
\mathscr{V}(t; \, f) := \|f\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^k)} + \sup_{s \leq t} \|f(\cdot, \, s)\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^k)} \, .
\end{align}
Moreover, we will need H\"older half-norms. For $\alpha, \, \beta \in [0,\, 1]$ and scalar-valued $f$, we denote
\begin{align*}
[f]_{C^{\alpha}(\Omega)} := \sup_{x \neq y \in \Omega} \frac{|f(x) - f(y)|}{|x-y|^{\alpha}}\, , \quad [f]_{C^{\alpha}(0,T)} := \sup_{t \neq s \in [0,T]} \frac{|f(t) - f(s)|}{|t-s|^{\alpha}}\, ,\\
[f]_{C^{\alpha,\beta}(Q_T)} := \sup_{t \in [0, \, T]} [f(\cdot, \, t)]_{C^{\alpha}(\Omega)} + \sup_{x \in \Omega} [f(x, \cdot)]_{C^{\beta}(0,T)} \, .
\end{align*}
The corresponding H\"older norms $\|f\|_{C^{\alpha}(\Omega)}$, $\|f\|_{C^{\alpha}(0,T)}$ and $\|f\|_{C^{\alpha,\beta}(Q_T)}$ are defined by adding the corresponding $L^{\infty}$-norm to the half-norm.
\subsection{Estimates for a linearised problem in the variables $q_1,\ldots, q_{N-1}$}
We commence with a statement concerning the linearisation of $\widetilde{\mathscr{A}}^1$ (cf.\ \eqref{A1equiv}).
\begin{prop}\label{A1linmain}
Assume that $R_q, \, \widetilde{M}: \, \mathbb{R}_+ \times \mathbb{R}^{N-1} \rightarrow \mathbb{R}^{(N-1)\times (N-1)}$ are maps of class $C^1$ taking values in the set of positive definite matrices. Suppose further that $q^{*} \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$ and $\varrho^* \in W^{1,1}_{p,\infty}(Q_T)$ ($p > 3$) are given, where $\varrho^*$ is strictly positive. We denote $R_q^* := R_q(\varrho^*, \, q^{*})$ and $ \widetilde{M}^* := \widetilde{M}(\varrho^*, \, q^{*})$. For $t \leq T$, we further define
\begin{align*}
m^*(t) := \inf_{(x,s) \in Q_t} \varrho^*(x, \, s) > 0 \, , \quad M^*(t) := \sup_{(x,s) \in Q_t} \varrho^*(x, \, s) \, .
\end{align*}
We assume that $g \in L^p(Q_T; \, \mathbb{R}^{N-1})$ and $q^0 \in W^{2-\frac{2}{p}}_p(\Omega)$ are given and that $\nu \cdot \nabla q^0(x) = 0$ in the sense of traces on $\partial \Omega$. Then there is a unique $q \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$, solution to the problem
\begin{align}\label{qlinear}
R_q^* \, q_t - \divv (\widetilde{M}^{*} \, \nabla q) = g \, \text{ in } Q_T, \quad \nu \cdot \nabla q = 0 \text{ on } S_T, \, \quad
q(x, \, 0) = q^0(x) \text{ in } \Omega \, .
\end{align}
Moreover, there is a constant $C$ independent of $T$, $q$, $\varrho^*$ and $q^*$, as well as a continuous function $\Psi_1 = \Psi_1(t, \, a_1,\ldots,a_6)$ defined for all $t \geq 0$ and all numbers $a_1, \ldots, a_6 \geq 0$, such that for all $t \leq T$ and $0 < \beta \leq 1$, it holds that
\begin{align*}
& \mathscr{V}(t; \, q) \leq C \, \Psi_{1,t}\, \left[(1+[\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\tfrac{2}{\beta}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|g\|_{L^p(Q_t)}\right] \, , \\[0.1cm]
& \Psi_{1,t} := \Psi_1(t, \, (m^*(t))^{-1}, \, M^*(t), \, \|q^*(0)\|_{C^{\beta}(\Omega)}, \,
\mathscr{V}(t; \, q^*), \, [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}, \, \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)}) \, .
\end{align*}
In addition, $\Psi_1$ is increasing in all arguments and $\Psi_1(0, \, a_1,\ldots,a_6) = \Psi_1^0(a_1, \, a_2, \, a_3)$ does not depend on the last three arguments.
\end{prop}
\begin{proof}
We prove here only the unique solvability. Owing to its technicality, the proof of the estimate is given separately below.
After computing the divergence and inverting $R_q^*$ in \eqref{qlinear}, the vector field $q$ is equivalently required to satisfy the relations
\begin{align}\label{petrovskisyst}
q_t - [R_q^*]^{-1} \, \widetilde{M}^* \, \triangle q = [R_q^*]^{-1} \, g + [R_q^*]^{-1} \, \nabla \widetilde{M}^* \cdot \nabla q \, \, .
\end{align}
The matrix $A^* := [R_q^*]^{-1} \, \widetilde{M}^*$ is the product of two symmetric positive definite matrices. Lemma \ref{ALGEBRA} implies that its eigenvalues are real and strictly positive. Moreover,
\begin{align}\label{EVPROD}
\frac{\lambda_{\min}(\widetilde{M}^*)}{\lambda_{\max}(R_q^*)} \leq \lambda_{\min}(A^*) \leq \lambda_{\max}(A^*) \leq \frac{\lambda_{\max}(\widetilde{M}^*)}{\lambda_{\min}(R_q^*)} \, .
\end{align}
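A sketch of \eqref{EVPROD}, which is standard linear algebra, recorded here for the reader's convenience: $A^*$ is similar to a symmetric positive definite matrix,

```latex
[R_q^*]^{-1}\,\widetilde{M}^*
  = [R_q^*]^{-\frac12}\,
    \Big( [R_q^*]^{-\frac12}\,\widetilde{M}^*\,[R_q^*]^{-\frac12} \Big)\,
    [R_q^*]^{\frac12} \, ,
% so the spectrum of A^* coincides with that of the symmetric matrix
% B^* := [R_q^*]^{-1/2} \widetilde{M}^* [R_q^*]^{-1/2}, hence is real
% and strictly positive. With the substitution y := [R_q^*]^{-1/2} x,
% the Rayleigh quotient of B^* becomes
\frac{\langle B^* x, \, x\rangle}{\langle x, \, x\rangle}
  = \frac{\langle \widetilde{M}^* y, \, y\rangle}{\langle R_q^* y, \, y\rangle}
  \in \Big[ \frac{\lambda_{\min}(\widetilde{M}^*)}{\lambda_{\max}(R_q^*)}, \;
            \frac{\lambda_{\max}(\widetilde{M}^*)}{\lambda_{\min}(R_q^*)} \Big] \, ,
% which yields the two-sided bound \eqref{EVPROD}.
```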
Thus, the equations \eqref{petrovskisyst} form a linear parabolic system in the sense of Petrovski (\cite{ladu}, Chapter VII, Paragraph 8, Definition 2). We apply the result of \cite{sologeneral}, Chapter V, recapitulated in \cite{ladu}, Chapter VII, Theorem 10.4, and enriched and refined in several contributions of the school of maximal parabolic regularity, for instance in \cite{denkhieberpruess}, and we obtain the unique solvability. Note that in the case of the equations \eqref{petrovskisyst}, this machinery does not need to be applied in its full complexity. The reason is that the second-order differential operator in space is the Laplacian in each row of the system. Using this fact, the continuity estimate for the system \eqref{petrovskisyst} can be established by elementary means, as the proof of Lemma \ref{A1linUMFORM} below reveals (Appendix, Section \ref{technique}). From the estimate we can easily pass to solvability by linear continuation.
\end{proof}
Due to its technicality, the proof of the estimate is split into several steps. The first step, accomplished in the following Lemma, is the principal estimate. Subsequent statements are needed to attain the bound as formulated in Proposition \ref{A1linmain}.
\begin{lemma}\label{A1linUMFORM}
We adopt the assumptions of Proposition \ref{A1linmain}. Then for arbitrary $\beta \in ]0, \, 1]$, there is a constant $C$ independent of $T$, $q$, $\varrho^*$ and $q^*$ such that, for all $t \leq T$,
\begin{align*}
\mathscr{V}(t; \, q) \leq & C \, \phi^*_{0,t} \, (1+ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\frac{2}{\beta}} \, (\|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|q\|_{W^{1,0}_p(Q_t)}) \\
& + C \, \phi^*_{1,t} \, (\|g\|_{L^p(Q_t)} + \|\nabla \varrho^* \cdot \nabla q\|_{L^p(Q_t)} + \|\nabla q^* \cdot \nabla q\|_{L^p(Q_t)} ) \, .
\end{align*}
For $i = 0, \, 1$, there is a continuous function $\phi_{i}^* = \phi^*_i(a_1, \, a_2, \, a_3)$, defined for $a_1,\, a_2,\, a_3 \geq 0$ and increasing in each argument, such that $ \phi^*_{i,t} = \phi^*_i((m^*(t))^{-1}, \, M^*(t), \, \|q^*\|_{L^{\infty}(Q_t; \, \mathbb{R}^{N-1})})$.
\end{lemma}
\begin{rem}\label{eigenvaluesandlipschitzconstants}
The proof shall moreover show that the growth of $\phi^*_{0,t}, \, \phi^*_{1,t}$ can be estimated by a function of the minimal/maximal eigenvalues of the matrices $R_q^*$ and $ \widetilde{M}^*$, and of their local Lipschitz constants over the range of $(\varrho^*, \, q^*)$. To extract this point more easily, we define, for a Lipschitz continuous matrix-valued mapping $A: \, \mathbb{R}^+\times \mathbb{R}^{N-1} \rightarrow \mathbb{R}^{(N-1) \times (N-1)}$ taking values in the positive definite matrices (for instance $A = R_q$ or $A = \widetilde{M}$), and for $t \geq 0$, the functions
\begin{align*}
\lambda_0(t, \, A^*) := & \inf_{(x, \, s) \in Q_t} \lambda_{\min}[A(\varrho^*(x, \,s), \, q^*(x, \, s))]\, ,\\
\lambda_1(t, \, A^*) := & \sup_{(x, \, s) \in Q_t} \lambda_{\max}[A(\varrho^*(x, \,s), \, q^*(x, \, s))]\, ,\\
L(t, \, A^*) := & \sup_{(x, \, s) \in Q_t} |\partial_{\varrho}A(\varrho^*(x, \,s), \, q^*(x, \, s))| + \sup_{(x, \, s) \in Q_t} |\partial_{q}A(\varrho^*(x, \,s), \, q^*(x, \, s))|\, .
\end{align*}
It is possible to reinterpret these expressions as increasing functions of $ (m^*(t))^{-1}$, of $M^*(t)$, and of $\|q^*\|_{L^{\infty}(Q_t; \, \mathbb{R}^{N-1})}$.
In the statement of Lemma \ref{A1linUMFORM}, we then can choose
\begin{align}\label{LinftyFACTORS}
\phi^*_{0,t} := & \frac{\lambda_{0}(t, \, R_q^*)+\lambda_{1}(t, \, \widetilde{M}^*)}{\lambda_{0}^{\tfrac{3}{2}}(t, \, R_q^*)} \, \frac{\max\{1, \, \frac{\lambda_{1}(t, \, \widetilde{M}^*)}{\lambda_{0}(t, \, R_q^*)}\}}{\min\{1, \, \frac{\lambda_{0}(t, \, \widetilde{M}^*)}{\lambda_{1}(t, \, R_q^*)}\}} \times \nonumber\\
& \times \left(1+ \frac{ \max\{1, \, \frac{\lambda_{1}(t, \, \widetilde{M}^*)}{\lambda_{0}(t, \, R_q^*)}\}}{ \min\{1, \, \frac{\lambda_{0}(t, \, \widetilde{M}^*)}{\lambda_{1}(t, \, R_q^*)}\}} \, \frac{(\lambda_{1}(t, \, \widetilde{M}^*) + \lambda_{0}(t, \, R_q^*)) \, (L(t, \, \widetilde{M}^*) + L(t, \, R_q^*) ) }{\lambda_{0}^{\frac{5}{2}}(t, \, R_q^*)}\right) \, ,\nonumber\\
\phi^*_{1,t} := & \frac{ (1+L(t, \, \widetilde{M}^*)) \, \max\{1, \, \frac{\lambda_{1}(t, \, \widetilde{M}^*)}{\lambda_{0}(t, \, R_q^*)}\}}{\lambda_{0}^{\tfrac{3}{2}}(t, \, R_q^*) \, \min\{1, \, \frac{\lambda_{0}(t, \, \widetilde{M}^*)}{\lambda_{1}(t, \, R_q^*)}\}} \, \, .
\end{align}
\end{rem}
The proof is interesting but lengthy. Since the use of H\"older norms to control the dependence on the coefficients is a classical tool, we give the proofs of these statements in the Appendix, Section \ref{technique}.
In order to prove Proposition \ref{A1linmain}, we need some reformulation of the estimate of Lemma \ref{A1linUMFORM}.
\begin{coro}\label{A1linUMFORMsecond}
We adopt the situation of Proposition \ref{A1linmain}, and for $\beta \in ]0, \, 1]$, we denote
\begin{align*}
\phi_{2,t}^* & := \phi^*_{0,t} \, (1+ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\frac{2}{\beta}} \, ,\\
B^*_t & := \phi^*_{2,t} + 1 +(\phi^*_{1,t})^{\tfrac{2p}{p-3}} \, (\sup_{s\leq t} \|q^*(s)\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)})^{\tfrac{2p}{p-3}} \, .
\end{align*}
Then there are constants $c_1, \, c_2$, independent of $T$, $q$, $\varrho^*$ and $q^*$, such that
\begin{align*}
\mathscr{V}(t; \, q) \leq c_1 \, \big(1+ t^{\tfrac{1}{p}}\, B^*_t \, \exp(c_2 \, t\, [B^*_t]^p)\big) \, (\phi^*_{2,t} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \phi^*_{1,t} \, \|g\|_{L^p(Q_t)}) \, .
\end{align*}
\end{coro}
\begin{proof}
We start from the main inequality of Lemma \ref{A1linUMFORM}. Raising it to the $p$-th power, we obtain
\begin{align}\label{ControlProducts0}
\mathscr{V}^p(t, \, q) \leq & C_p \, (\phi^*_{2,t})^p \, (\|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + \|q\|^p_{W^{1,0}_p(Q_t)}) \nonumber\\
& + C_p \,(\phi^*_{1,t})^p (\|g\|_{L^p(Q_t)}^p + \|\nabla \varrho^* \cdot \nabla q\|^p_{L^p(Q_t)} + \|\nabla q^* \cdot \nabla q\|^p_{L^p(Q_t)} ) \, .
\end{align}
Since $\|q\|_{W^{1,0}_p(Q_t)}^p = \int_{0}^t \|q(s)\|_{W^{1,p}(\Omega)}^p \, ds$ and $\|q(s)\|_{W^{1,p}(\Omega)} \leq \sup_{\tau\leq s} \|q(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}$, we see that $\|q\|_{W^{1,0}_p(Q_t)}^p \leq \int_{0}^t \sup_{\tau\leq s} \|q(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p \, ds$.
Owing to the Gagliardo--Nirenberg inequality \eqref{gagliardo}, $$\|\nabla q(s)\|_{L^{\infty}(\Omega)} \leq C_1 \, \|D^2q(s)\|_{L^p(\Omega)}^{\alpha} \, \|q(s)\|_{L^p(\Omega)}^{1-\alpha} + C_2 \, \|q(s)\|_{L^p(\Omega)}$$ for $\alpha := \tfrac{1}{2}+\tfrac{3}{2p}$. Employing also Young's inequality, $a \, b \leq \epsilon \, a^{\tfrac{1}{\alpha}} + c_{\alpha} \, \epsilon^{-\tfrac{\alpha}{1-\alpha}} \, b^{\tfrac{1}{1-\alpha}}$, valid for all $\epsilon > 0$ and $a, \, b > 0$, it follows that
\begin{align}\label{ControlProducts1}
& \|\nabla \varrho^* \cdot \nabla q\|_{L^p(Q_t)}^p \leq \int_{0}^t |\nabla \varrho^*(s)|_{p}^p \, |\nabla q(s)|_{\infty}^p \, ds\nonumber\\
& \leq C_1 \, \int_{0}^t |\nabla \varrho^*(s)|_{p}^p \, |D^2q(s)|_{p}^{p\alpha} \, |q(s)|_{p}^{p(1-\alpha)} \, ds + C_2 \,\int_{0}^t |\nabla \varrho^*(s)|_{p}^p \, |q(s)|_{p}^p \, ds\nonumber\\
& \leq \epsilon \, \int_{0}^t |D^2q(s)|_{p}^{p} \, ds + c_{\alpha} \, \epsilon^{-\tfrac{\alpha}{1-\alpha}} \, \int_{0}^t |\nabla \varrho^*(s)|_{p}^{\frac{p}{1-\alpha}} \, |q(s)|_{p}^{p} \, ds + C_2 \,\int_{0}^t |\nabla \varrho^*(s)|_{p}^p \, |q(s)|_{p}^p \, ds\nonumber\\
& \leq \epsilon \, \int_{0}^t |D^2q(s)|_{p}^{p} \, ds + \int_{0}^t |q(s)|_{L^p}^{p} \, (c_{\alpha} \, \epsilon^{-\frac{\alpha}{1-\alpha}}\, |\nabla \varrho^*(s)|_{p}^{\frac{p}{1-\alpha}} + C_2\, |\nabla \varrho^*(s)|_{p}^p) \, ds \, .
\end{align}
Here we have denoted by $| \cdot |_{r}$ the norm in $L^r(\Omega)$ in order to save room. Similarly,
\begin{align}\label{ControlProducts2}
\|\nabla q^* \cdot \nabla q\|_{L^p(Q_t)}^p \leq & \epsilon \, \int_{0}^t |D^2q(s)|_{p}^{p} \, ds + \int_{0}^t |q(s)|_{p}^{p} \, (c_{\alpha} \, \epsilon^{-\frac{\alpha}{1-\alpha}} \, |\nabla q^*(s)|_{p}^{\frac{p}{1-\alpha}} + C_2\, |\nabla q^*(s)|_{p}^p) \, ds\, .
\end{align}
In \eqref{ControlProducts1} and \eqref{ControlProducts2} we estimate roughly $ \|q(s)\|_{L^p(\Omega)} \leq \sup_{\tau \leq s} \|q(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}$. Using the abbreviation $F^*(s) := \|\nabla q^*(s)\|_{L^p(\Omega)}^p+ \|\nabla \varrho^*(s)\|_{L^p(\Omega)}^p $, it follows that
\begin{align}\label{ControlProducts3}
& \|\nabla \varrho^* \cdot \nabla q\|^p_{L^p(Q_t)} + \|\nabla q^* \cdot \nabla q\|^p_{L^p(Q_t)} \leq 2 \, \epsilon \, \int_{0}^t |D^2q(s)|_{p}^{p} \, ds \\
& \qquad + \int_{0}^t \sup_{\tau\leq s} \|q(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p\, [ c_{\alpha} \, \epsilon^{-\frac{\alpha}{1-\alpha}} \, (F^*(s))^{\frac{1}{1-\alpha}} + C_2\, F^*(s)] \, ds \, . \nonumber
\end{align}
We now choose $\epsilon = \frac{1}{4\,C_p\, (\phi^*_{1,t})^p}$, so that the $D^2q$ contributions can be absorbed into the left-hand side. This, in connection with \eqref{ControlProducts0} and \eqref{ControlProducts3}, yields
\begin{align}\label{ControlProducts4}
\frac{1}{2} \, \mathscr{V}^p(t; \, q) & \leq c_p \, ((\phi^*_{2,t})^p \, \|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + (\phi^*_{1,t})^p \, \|g\|_{L^p(Q_t)}^p + E(t) \, \int_{0}^t f(s) \, ds) \, ,\\
f(s) & := \sup_{\tau \leq s} \|q(\tau)\|^p_{W^{2-\frac{2}{p}}_p(\Omega)}\, ,\nonumber\\
E(t)& := (\phi^*_{2,t})^p + \tilde{c}_{p,\alpha} \, (\phi^*_{1,t})^{\frac{p}{1-\alpha}} \, \sup_{s \leq t} (F^*(s))^{\frac{1}{1-\alpha}}
+ C_2 \, (\phi^*_{1,t})^p \, \sup_{s \leq t} F^*(s) \, .\nonumber
\end{align}
The latter inequality, together with $f(t) \leq \mathscr{V}^p(t; \, q)$, implies that $f(t) \leq A(t) + E(t) \, \int_0^t f(s) \, ds$ with $A(t) := c_p \, ((\phi^*_{2,t})^p \, \|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + (\phi^*_{1,t})^p \, \|g\|_{L^p(Q_t)}^p)$, after adjusting the constant $c_p$. Since $A$ and $E$ are increasing, Gronwall's lemma yields $f(t) \leq A(t) \, \exp(t \, E(t))$, which means that
\begin{align}\label{supi}
\sup_{s \leq t} \|q(s)\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} \leq c_p \, ((\phi^*_{2,t})^p \, \|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + (\phi^*_{1,t})^p \, \|g\|_{L^p(Q_t)}^p) \,\exp\left(t \, E(t)\right) \, .
\end{align}
Combining \eqref{supi} and \eqref{ControlProducts4} we obtain that
\begin{align}\label{supii}
\frac{1}{2} \, \mathscr{V}^p(t; \, q) \leq c_p \, [(\phi^*_{2,t})^p \, \|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + (\phi^*_{1,t})^p \, \|g\|_{L^p(Q_t)}^p] \, [1+ t\, E(t) \, \exp(t\, E(t))] \, .
\end{align}
Estimating $F^*(s) \leq (\sup_{s \leq t} \|q^*(s)\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)})^p$, we obtain after another application of Young's inequality with power $1/\alpha$ that
\begin{align*}
E(t) & \leq c_{p,\alpha} \, [(\phi^*_{2,t})^p + 1 +(\phi^*_{1,t})^{\tfrac{p}{1-\alpha}} \, (\sup_{s\leq t} \|q^*(s)\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)})^{\frac{p}{1-\alpha}}] \, .
\end{align*}
We raise both sides of \eqref{supii} to the power $1/p$. Recalling that $\alpha =\frac{1}{2}+\frac{3}{2p}$, so that $\frac{1}{1-\alpha} = \frac{2p}{p-3}$, the claim follows.
\end{proof}
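For the reader's convenience, the version of Gronwall's lemma used above can be sketched as follows, under the monotonicity assumptions available here:

```latex
% Claim: if f is nonnegative, A, E are nonnegative and nondecreasing
% on [0, T], and  f(t) \le A(t) + E(t) \int_0^t f(s) ds  for t \le T,
% then  f(t) \le A(t) \exp(t E(t)).
% Sketch (for E(t) > 0; the case E(t) = 0 is trivial): fix t and set
% F(s) := \int_0^s f(\tau) d\tau for s \le t. Then
F'(s) = f(s) \le A(s) + E(s) \, F(s) \le A(t) + E(t) \, F(s) \, ,
% so \big(F(s)\,e^{-s E(t)}\big)' \le A(t)\,e^{-s E(t)};
% integrating over (0, t) gives
F(t) \le \frac{A(t)}{E(t)} \big( e^{t E(t)} - 1 \big) \, ,
% and inserting this into the hypothesis at s = t yields
f(t) \le A(t) + E(t) \, F(t) \le A(t) \, e^{t E(t)} \, .
```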
To conclude the section, we show how to obtain the estimate of Proposition \ref{A1linmain} as stated. We start from Corollary \ref{A1linUMFORMsecond}, in which $\phi_{2,t}^* := \phi_{0,t}^* \, (1+ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\frac{2}{\beta}}$. Moreover, Lemma \ref{A1linUMFORM} shows that $\phi^*_{0,t} = \phi^*_0((m^*(t))^{-1}, \, M^*(t), \, \|q^*\|_{L^{\infty}(Q_t)})$ is an increasing function of its arguments.
First, we roughly estimate
\begin{align}\label{rough}
\phi_{2,t}^* \leq & \phi^*_{0,t} \, [1+ [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}]^{\frac{2}{\beta}}\, [1+ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}]^{\frac{2}{\beta}} \, .
\end{align}
Making use of Lemma \ref{HOELDERlemma}, we can further estimate the quantities $[q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}$ and $\|q^*\|_{L^{\infty}(Q_t)}$ in the latter expression. Define $\gamma$ as in Lemma \ref{HOELDERlemma}. For $0< \beta < \min\{1, \, 2-\frac{5}{p}\}$ it follows that
\begin{align}\label{hoeldercontrol}
\|q^*\|_{C^{\beta,\frac{\beta}{2}}(Q_t)} \leq C(t) \, (\|q^*(0)\|_{C^{\beta}(\Omega)} + t^{\gamma} \, \mathscr{V}(t; \, q^*)) \, ,
\end{align}
where we assume for simplicity $C(t) \geq 1$ in Lemma \ref{HOELDERlemma}. Using that $\phi^*_0$ is increasing in its arguments, we obtain with the help of \eqref{hoeldercontrol}
\begin{align*}
\phi^*_{0,t} \, [1+ [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}]^{\frac{2}{\beta}} \leq & \phi^*_0((m^*(t))^{-1}, \, M^*(t), \, C(t) \, ( \|q^*(0)\|_{C^{\beta}(\Omega)} + t^{\gamma} \, \mathscr{V}(t; \, q^*))) \times\\
& \times [1+ C(t) \, ( \|q^*(0)\|_{C^{\beta}(\Omega)} + t^{\gamma} \, \mathscr{V}(t; \, q^*) ) ]^{\frac{2}{\beta}}\, .
\end{align*}
The latter expression is next reinterpreted as a function $\phi^*_3$ of the arguments $t$, $(m^*(t))^{-1}$, $M^*(t)$, $\|q^*(0)\|_{C^{\beta}(\Omega)}$ and $\mathscr{V}(t; \, q^*)$, in that order. This means that for $t \geq 0$ and $a_1, \ldots, a_4 \geq 0$, we define
\begin{align*}
\phi^*_3(t,\, a_1, \, \ldots, a_4) = \phi^*_0(a_1, \, a_2,\, \, C(t) \, (a_3 + t^{\gamma} \, a_4)) \, [1+ C(t)\, ( a_3 + t^{\gamma} \, a_4 ) ]^{\frac{2}{\beta}} \, .
\end{align*}
This definition shows in particular that $\phi^*_3(0,\, a_1, \, \ldots, a_4) = \phi^*_0(a_1, \, a_2, \, C_0 \, a_3)\, (1+ C_0 \, a_3)^{\frac{2}{\beta}} $ is independent of $a_4$. Moreover, in view of \eqref{rough},
\begin{align}\label{pii}
\phi_{2,t}^* \leq \phi^*_3(t,\, (m^*(t))^{-1}, \, M^*(t), \, \|q^*(0)\|_{C^{\beta}(\Omega)}, \, \mathscr{V}(t; \, q^*)) \, [1+ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}]^{\frac{2}{\beta}} \, .
\end{align}
We next invoke Corollary \ref{A1linUMFORMsecond}, where we have shown that
\begin{align}\label{piii}
\mathscr{V}(t; \, q) \leq c_1 \, [\phi^*_{2,t} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \phi^*_{1,t} \, \|g\|_{L^p(Q_t)}] \, [1+ t^{\tfrac{1}{p}}\, B^*_t \, \exp(c_2 \, t\, [B^*_t]^p)] \, ,
\end{align}
and that $B^*_t = \phi^*_{2,t} + 1 +(\phi^*_{1,t})^{\tfrac{2p}{p-3}} \, (\sup_{s\leq t} \|q^*(s)\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)})^{\tfrac{2p}{p-3}}$. Due to \eqref{pii}, we can bound $\phi_2^*$ and see that $t^{\frac{1}{p}} \, B^*_t$ is estimated by a function $\phi_4^*$ of the quantities $t$, $(m^*(t))^{-1}$, $M^*(t)$, $\|q^*(0)\|_{C^{\beta}(\Omega)}$, $\mathscr{V}(t; \, q^*)$, and $ [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}$, $\|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)}$, in that order. The function $\phi_4^*$ is defined via
\begin{align*}
\phi_4^*(t, \, a_1,\ldots,a_6) = t^{\frac{1}{p}} \, [1+\phi_3^*(t, \, a_1,\ldots,a_4) \, (1+a_5)^{\frac{2}{\beta}} + (\phi_{1}^*(a_1,a_2,a_3))^{\tfrac{2p}{p-3}} \, (a_4 + a_6)^{\frac{2p}{p-3}}] \, .
\end{align*}
In particular, $\phi_4^*(0, \, a) = 0$. Moreover we obtain with the help of \eqref{piii} that
\begin{align*}
\mathscr{V}(t; \, q) \leq & c_1 \, (1+ \phi^*_{4,t} \, \exp(c_2 \, [\phi^*_{4,t}]^p)) \, [\phi^*_{3,t} \, (1+[\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\tfrac{2}{\beta}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \phi^*_{1,t} \, \|g\|_{L^p(Q_t)}] \, \, .
\end{align*}
We define $\Psi_{1,t} := (1+ \phi^*_{4,t} \, \exp(c_2 \, [\phi^*_{4,t}]^p)) \, \max\{\phi^*_{3,t}, \, \phi^*_{1,t}\}$. More precisely, for all non-negative numbers $a_1,\ldots,a_6$, we define
\begin{align*}
\Psi_1(t, \, a_1,\ldots,a_6) := & (1+ \phi^*_{4}(t, \, a_1,\ldots,a_6) \, \exp(c_2 \, [\phi^*_{4}(t, \, a_1,\ldots,a_6)]^p)) \times\\
& \times \max\{\phi^*_{3}(t,\, a_1, \ldots, a_4) , \, \phi^*_{1}(a_1, \, a_2, \, a_3) \} \, .
\end{align*}
By means of the properties of $\phi_3^*$ and $\phi_4^*$, we verify that
\begin{align}\label{psi0formula}
\Psi_1(0, \, a_1,\ldots,a_6) & = \max\{\phi^*_0(a_1, \, a_2, \, C_0 \, a_3)\, (1+ C_0 \, a_3)^{\frac{2}{\beta}}, \, \phi^*_{1}(a_1, \, a_2, \, a_3) \}\nonumber\\
&=: \Psi^0_1(a_1, \, a_2, \, a_3) \, ,
\end{align}
which is independent of the arguments $(a_4, \, a_5, \, a_6)$. This is the claim of Proposition \ref{A1linmain}. We could further specify the function $\Psi^0_1$ by means of \eqref{LinftyFACTORS}.
\subsection{Estimates for linearised problems for the variables $v$ and $\varrho$}
After these technicalities, we can rely on estimates already available in the literature. We first quote Theorem 1 of \cite{solocompress}.
\begin{prop}\label{solonnikov1}
Suppose that $\varrho^* \in C^{\alpha,0}(Q_T)$, $0< \alpha \leq 1$, is strictly positive, and denote $M^*(t) := \max_{Q_t} \varrho^*$, $m^*(t) := \min_{Q_t} \varrho^*$.
Let $f \in L^p(Q_T; \, \mathbb{R}^3)$ and $v^0 \in W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^3)$ with $v^0 = 0$ on $\partial \Omega$.
Then there is a unique solution $v \in W^{2,1}(Q_T; \, \mathbb{R}^3)$ to $\varrho^* \, \partial_t v - \divv \mathbb{S}(\nabla v) = f$ in $Q_T$ with the boundary conditions $v = 0$ on $S_T$ and $v(x, \, 0) = v^0(x)$ in $\Omega$. Moreover, for all $t \leq T$,
\begin{align*}
\mathscr{V}(t; \, v) \leq & C \, \phi_{5,t}^* \, (1+\sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)})^{\frac{2}{\alpha}} \, (\|f\|_{L^p(Q_t)} + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|v\|_{L^p(Q_t)}) \, , \\
\phi_{5,t}^* := & \left(\frac{1}{\min\{1, \, m^*(t)\}}\right)^{\frac{2}{\alpha}} \, \left(\frac{M^*(t)}{m^*(t)} \right)^{\frac{p+1}{p}} \, \, .\nonumber
\end{align*}
\end{prop}
A Gronwall argument as in Lemma \ref{A1linUMFORMsecond} allows us to remove the term $\|v\|_{L^p(Q_t)}$ from the right-hand side.
\begin{coro}\label{normvmain}
Adopt the assumptions of Proposition \ref{solonnikov1}. Then there is a constant $C$ independent of $t$, $\varrho^*$, $v^0$, $f$ and $v$, and a continuous function $\Psi_2 = \Psi_2(t, \, a_1, \, a_2, \, a_3)$, defined for all $t \geq 0$ and all $a_1, \, a_2, \, a_3 \geq 0$, such that
\begin{align*}
\mathscr{V}(t; \, v) \leq & C \, \Psi_{2,t} \, (1+ \sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)})^{\frac{2}{\alpha}} \, (\|f\|_{L^p(Q_t)} + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)})\, , \nonumber\\
\Psi_{2,t} := & \Psi_2(t, \, (m^*(t))^{-1}, \, M^*(t), \, \sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)}) \, .
\end{align*}
The function $\Psi_2$ is increasing in all its arguments, and the value $\Psi_2(0, \, a_1, \, a_2, \, a_3) = \phi_{5}^*(a_1, \,a_2)$ is independent of $a_3$.
\end{coro}
\begin{proof}
Introduce first the abbreviation $\tilde{\phi}_{5,t}^* := \phi_{5,t}^* \, (1+\sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)})^{\frac{2}{\alpha}}$. We raise the estimate of Proposition \ref{solonnikov1} to the power $p$ and obtain that
\begin{align*}
\mathscr{V}^p(t; \, v) \leq & c_p \, (\tilde{\phi}_{5,t}^*)^p \, (\|f\|_{L^p(Q_t)}^p + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p + \int_{0}^t \|v(s)\|^p_{L^p(\Omega)} \, ds) \, .
\end{align*}
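In schematic form, the Gronwall step works as follows: setting $y(t) := \mathscr{V}^p(t; \, v)$ and using that $\|v(s)\|_{L^p(\Omega)} \leq C \, \mathscr{V}(s; \, v)$ (recall \eqref{Vfunctor}), the last inequality implies
\begin{align*}
y(t) \leq A(t) + B(t) \, \int_0^t y(s) \, ds \, , \quad A(t) := c_p \, (\tilde{\phi}_{5,t}^*)^p \, (\|f\|_{L^p(Q_t)}^p + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p) \, , \quad B(t) := C \, c_p \, (\tilde{\phi}_{5,t}^*)^p \, ,
\end{align*}
in which $A$ and $B$ are non-decreasing in $t$. The Gronwall lemma therefore yields $y(t) \leq A(t) \, e^{t \, B(t)}$, and we conclude by taking the $p-$th root.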
We then argue as in Lemma \ref{A1linUMFORMsecond}, using the Gronwall lemma. We let $\Psi_{2,t} := \phi_{5,t}^* \, (1 + t^{\frac{1}{p}} \, \tilde{\phi}_{5,t}^* \, e^{C \, (\tilde{\phi}_{5,t}^*)^p \, t})$, and the claim follows.
\end{proof}
We recall a further theorem from \cite{solocompress} concerning the linearised continuity equation.
\begin{prop}\label{solonnikov2}
Suppose that $v^* \in W^{2,1}_p(Q_T; \, \mathbb{R}^3)$. Suppose that $\varrho_0 \in W^{1,p}(\Omega)$ satisfies $0 < m_0 \leq \varrho_0(x) \leq M_0 < + \infty$ in $\Omega$.
Then the problem $\partial_t \varrho + \divv(\varrho \, v^*) = 0$ in $Q_T$ with $\varrho(x, \, 0) = \varrho_0(x)$ in $\Omega$ possesses a unique solution of class $W^{1,1}_{p,\infty}(Q_T)$ for which
\begin{align*}
m_0 \, [\phi_{6,t}^*]^{-1} \leq \varrho(x, \, t) \leq M_0 \, \phi_{6,t}^* \text{ for all } (x, \, t) \in Q_T \, ,
\end{align*}
with $\phi_{6,t}^* := e^{\sqrt{3} \, \|v^*_x\|_{L^{\infty,1}(Q_t)}}$. Moreover, for all $t < T$ and $0 < \alpha < 1$,
\begin{align*}
& \|\nabla \varrho(t)\|_{L^p(\Omega)} \leq \sqrt{3} \, [\phi_{6,t}^*]^{(2+\tfrac{1}{p})\sqrt{3}} \, (\|\nabla \varrho_0\|_{L^p(\Omega)} +\sqrt{3} \, \|\varrho_0\|_{L^{\infty}(\Omega)} \, \|v^*_{x,x}\|_{L^{p,1}(Q_t)}) \, ,\\
& [\varrho(t)]_{C^{\alpha}(\Omega)} \leq 3^{\tfrac{\alpha}{2}}\, [\phi_{6,t}^*]^{1+\alpha} \, ( [\varrho_0]_{C^{\alpha}(\Omega)} + \sqrt{3} \, \|\varrho_0\|_{L^{\infty}(\Omega)} \, \int_0^t [v^*_x(s)]_{C^{\alpha}(\Omega)} \, ds) \, .
\end{align*}
For $\alpha < 1$, there is $c = c_{\alpha}$ such that for all $(x, \, t) \in Q_T$,
\begin{align*}
[\varrho(x)]_{C^{\alpha}(0,t)} \leq & c \, \|\varrho_0\|_{C^{\alpha}(\Omega)} \, \phi_{6,t}^*\, \Big(\|v_x^*\|_{L^{\infty,\frac{1}{1-\alpha}}(Q_t)}
+ (\|v^*\|_{L^{\infty}(Q_t)} \, \phi_{6,t}^*)^{\alpha} \, (1 + \int_0^t [v_x^*(\tau)]_{C^{\alpha}(\Omega)}\, d\tau)\Big) \, .
\end{align*}
\end{prop}
\begin{proof}
The first three estimates are stated and proved explicitly in \cite{solocompress}. In order to estimate the H\"older norm in time, we invoke the representation formula
\begin{align*}
\varrho(x, \, t) = \varrho_0(y(0; \, t,x)) \, \exp\left(-\int_{0}^t \divv v^*(y(\tau; \, t,x), \, \tau) \, d\tau \right) \, .
\end{align*}
We shall use the abbreviation $F(t) := - \int_{0}^t \divv v^*(y(\tau; \, t,x), \, \tau) \, d\tau$. The map $\tau \mapsto y(\tau; \, t,x)$ is the unique characteristic curve through $(x, \, t)$ with speed $v^*$. Recall also the formula given between equations (15) and (16) in \cite{solocompress}, namely
\begin{align}\label{charactimederiv}
|\partial_t y(\tau; \, t,x)| \leq \sqrt{3} \, \|v^*\|_{L^{\infty}(Q_t)} \, \phi_{6,t}^* \, .
\end{align}
Making use of the latter, we show for $s < t$ that
\begin{align*}
|F(t) - F(s)| \leq & 3^{\frac{1}{2}} \, \int_s^t \|v^*_x(\tau)\|_{L^{\infty}(\Omega)} \, d\tau + \int_{0}^s |\divv v^*(y(\tau; \, t,x), \, \tau) - \divv v^*(y(\tau; \, s,x), \, \tau)| \, d\tau\\
\leq & 3^{\frac{1}{2}} \, \int_s^t \|v^*_x(\tau)\|_{L^{\infty}(\Omega)} \, d\tau + 3 \, \int_0^s [v^*_x(\tau)]_{C^{\alpha}(\Omega)} \, (\sup_{\sigma \in [s,t]} |y_t(\tau; \, \sigma, \, x)|)^{\alpha} \, d\tau \, (t-s)^{\alpha} \, \\
\leq & 3^{\frac{1}{2}} \, (t-s)^{\alpha} \, (\|v^*_x\|_{L^{\infty,\frac{1}{1-\alpha}}(Q_t)} \, + 3^{\frac{1}{2}+\frac{\alpha}{2}} \, (\|v^*\|_{L^{\infty}(Q_t)} \, \phi_{6,t}^*)^{\alpha} \, \int_0^t [v^*_x(\tau)]_{C^{\alpha}(\Omega)}\, d\tau) \, .
\end{align*}
For $s <t$, we have
\begin{align*}
\varrho(x, \, t) - \varrho(x, \, s) = (\varrho_0(y(0; \, t,x)) -\varrho_0(y(0; \, s,x))) \, e^{F(t)} + \varrho_0(y(0; \, s,x)) \, (e^{F(t)} - e^{F(s)}) \, .
\end{align*}
By means of \eqref{charactimederiv}, it follows that
\begin{align*}
& | \varrho(x, \, t) - \varrho(x, \, s)| \leq \\
& [\varrho_0]_{C^{\alpha}(\Omega)} \, (\sqrt{3} \, \|v^*\|_{L^{\infty}(Q_t)} \, \phi_{6,t}^*)^{\alpha} \, (t-s)^{\alpha} \, e^{F(t)} + \|\varrho_0\|_{L^{\infty}(\Omega)} \, e^{\max\{|F(t)|, |F(s)|\}} \, |F(t)-F(s)|\\
& \leq c \, (t-s)^{\alpha} \, \phi_{6,t}^* \, \|\varrho_0\|_{C^{\alpha}(\Omega)} \times \\
& \qquad \times (\|v^*_x\|_{L^{\infty,\frac{1}{1-\alpha}}(Q_t)} + (\|v^*\|_{L^{\infty}(Q_t)} \, \phi_{6,t}^*)^{\alpha} \, (1+ \int_0^t [v^*_x(\tau)]_{C^{\alpha}(\Omega)}\, d\tau )) \, .
\end{align*}
\end{proof}
We also restate these estimates so that we can quote them more conveniently later.
\begin{coro}\label{normrhomain}
We adopt the assumptions of Proposition \ref{solonnikov2}. Define $m(t) := \inf_{Q_t} \varrho$ and $M(t) := \sup_{Q_t} \varrho$. We choose $\beta = 1-\tfrac{3}{p}$. Then there are functions $\Psi_3, \, \Psi_4,\Psi_5$ of the variables $t$, $\frac{1}{m_0}$, $M_0$, $\|\nabla \varrho_0\|_{L^{p}(\Omega)}$ and $\mathscr{V}(t; \, v^*)$, in that order, such that
\begin{align*}
\frac{1}{m(t)} \leq & \Psi_3(t, \, m_0^{-1}, \, \mathscr{V}(t; \, v^*)), \quad M(t) \leq \Psi_3(t,\, M_0, \, \mathscr{V}(t; \, v^*)) \, ,\\
\|\nabla \varrho\|_{L^{p,\infty}(Q_t)} \leq & \Psi_4(t, \, m_0^{-1}, \,M_0,\, \|\nabla \varrho_0\|_{L^{p}(\Omega)}, \, \mathscr{V}(t; \, v^*)) \, ,\\
[\varrho]_{C^{\beta,\frac{\beta}{2}}(Q_t)} \leq & \Psi_5(t, \, m_0^{-1}, \,M_0,\, \|\nabla \varrho_0\|_{L^{p}(\Omega)}, \, \mathscr{V}(t; \, v^*)) \, .
\end{align*}
For $i=3,4,5$, the function $\Psi_i$ is continuous and increasing in all variables. Moreover, the expression $\Psi_i(0, \, a_1, \ldots, a_4) = \Psi^0_i(a_1, \, a_2, \,a_3)$ is independent of the last variable $a_4 = \mathscr{V}(t; \, v^*)$. (The function $\Psi_3$ depends only on $t$, on $a_1$ or $a_2$, and on $a_4$.)
\end{coro}
\begin{proof}
The Sobolev embedding theorems imply that $\|v^*_x\|_{L^{\infty}(\Omega)} \leq C \, \|v^*\|_{W^{2,p}(\Omega)}$. It therefore follows from H\"older's inequality that
$\|v^*_x\|_{L^{\infty,1}(Q_t)} \leq C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}$.
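In detail, this step combines the embedding with H\"older's inequality in time:
\begin{align*}
\|v^*_x\|_{L^{\infty,1}(Q_t)} = \int_0^t \|v^*_x(s)\|_{L^{\infty}(\Omega)} \, ds \leq t^{1-\frac{1}{p}} \, \Big( \int_0^t \|v^*_x(s)\|_{L^{\infty}(\Omega)}^p \, ds \Big)^{\frac{1}{p}} \leq C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)} \, .
\end{align*}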
Thus $\phi_{6,t}^* \leq \exp(C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)})$. Invoking Proposition \ref{solonnikov2},
\begin{align*}
\frac{1}{m(t)} \leq \frac{1}{m_0} \, \exp(\sqrt{3} \, \|v^*_x\|_{L^{\infty,1}(Q_t)}) \leq & \frac{1}{m_0} \, \exp(C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}) \\
=: & \Psi_3(t, \, m_0^{-1}, \, \mathscr{V}(t; \, v^*)) \, .
\end{align*}
Thus, choosing $\Psi_3(t, \, a_1,\, a_4) := a_1 \, \exp(C \, t^{1-\frac{1}{p}} \,a_4)$, we have $ \frac{1}{m(t)} \leq \Psi_3$, and $\Psi_3(0, \, a_1, \, a_4) = a_1$ is independent of $a_4$. Similarly $M(t) \leq M_0 \, \exp(C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)})$. Moreover, again due to Proposition \ref{solonnikov2},
\begin{align*}
& \|\nabla \varrho(t)\|_{L^p(\Omega)} \leq \sqrt{3} \, [\phi_{6,t}^*]^{(2+\frac{1}{p})\sqrt{3}} \, (\|\nabla \varrho_0\|_{L^p(\Omega)} +\sqrt{3} \, \|\varrho_0\|_{L^{\infty}(\Omega)} \, \|v^*_{x,x}\|_{L^{p,1}(Q_t)})\\
& \leq \sqrt{3} \, [\phi_{6,t}^*]^{(2+\frac{1}{p})\sqrt{3}} \, (\|\nabla \varrho_0\|_{L^p(\Omega)} + \|\varrho_0\|_{L^{\infty}(\Omega)} \, \sqrt{3} \, t^{1-\frac{1}{p}} \, \|v^*_{x,x}\|_{L^{p}(Q_t)})\\
& \leq \sqrt{3} \, \exp(C \, (2+\frac{1}{p})\sqrt{3} \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}) \, (\|\nabla \varrho_0\|_{L^p(\Omega)} + \|\varrho_0\|_{L^{\infty}(\Omega)} \, \sqrt{3} \, t^{1-\frac{1}{p}} \, \|v^*_{x,x}\|_{L^{p}(Q_t)}) \, .
\end{align*}
We define $\Psi_4(t, \, a_1, \ldots, a_4) := \exp(C \, (2+\tfrac{1}{p})\sqrt{3} \, t^{1-\tfrac{1}{p}} \, a_4) \, (a_3 + a_2 \, \sqrt{3} \, t^{1-\tfrac{1}{p}} \, a_4)$. Then $ \|\nabla \varrho(t)\|_{L^p(\Omega)} \leq\sqrt{3} \, \Psi_4(t, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p}, \, \mathscr{V}(t; \, v^*))$. As required, the value $\Psi_4(0, \, a_1, \ldots,a_4) = a_3$ is independent of $a_4$.
For $\alpha = 1-\tfrac{3}{p}$, the Sobolev embedding yields $\|v^*_x(t)\|_{C^{\alpha}(\Omega)} \leq C \, \|v^*(t)\|_{W^{2,p}(\Omega)}$. Thus
\begin{align*}
& [\varrho(t)]_{C^{\alpha}(\Omega)} \leq 3^{\frac{\alpha}{2}}\, [\phi_{6,t}^*]^{1+\alpha} \, ( [\varrho_0]_{C^{\alpha}(\Omega)} + \sqrt{3} \, \|\varrho_0\|_{L^{\infty}(\Omega)} \, \int_0^t [v^*_x(s)]_{C^{\alpha}(\Omega)} \, ds) \,\\
& \leq 3^{\frac{\alpha}{2}}\, \exp((1+\alpha) \, C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}) \, ( [\varrho_0]_{C^{\alpha}(\Omega)} + \sqrt{3} \, \|\varrho_0\|_{L^{\infty}(\Omega)} \, C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_{p}(Q_t)}) \, .
\end{align*}
Moreover $ [\varrho_0]_{C^{\alpha}(\Omega)} \leq C \, \|\nabla \varrho_0\|_{L^p(\Omega)}$. Thus for $\Psi^{(1)}_5(t, \, a_1, \ldots,a_4) = \exp((1+\alpha)\, C \, t^{1-\frac{1}{p}} \, a_4) \, (a_3 + a_2 \, a_4\, t^{1-\frac{1}{p}})$, we find $[\varrho(t)]_{C^{\alpha}(\Omega)} \leq C \,\Psi_5^{(1)}(t, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p}, \, \mathscr{V}(t; \, v^*))$. Note that $\Psi^{(1)}_5(0, \, a_1, \ldots, a_4) = a_3$. For $x \in \Omega$ and $\alpha < 1$, we can invoke Proposition \ref{solonnikov2} to estimate $[\varrho(x)]_{C^{\alpha}(0,t)}$.
If $\alpha < 1-\frac{1}{p}$, H\"older's inequality implies for $r = \frac{(1-\alpha) \, p-1}{(1-\alpha) \, p} > 0$
\begin{align*}
\|v_x^*\|_{L^{\infty,\frac{1}{1-\alpha}}(Q_t)} \leq C \, \|v^*\|_{L^{\frac{1}{1-\alpha}}(0,t; \, W^{2,p}(\Omega))} \leq C \, t^r \, \|v^*\|_{W^{2,1}_p(Q_t)} \, .
\end{align*}
Moreover, for all $\alpha \leq 1-\frac{3}{p}$, we have $\int_0^t [v_x^*(\tau)]_{C^{\alpha}(\Omega)}\, d\tau \leq C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}$. For $\alpha = 1-\frac{3}{p}$ we define $r = \frac{2}{3}$ and we see that
\begin{align*}
& [\varrho(x)]_{C^{\alpha}(0,t)} \leq c \, \|\varrho_0\|_{C^{\alpha}(\Omega)} \, \exp(C \, t^{1-\frac{1}{p}} \,\|v^*\|_{W^{2,1}_p(Q_t)}) \, \Big(C \, t^{\frac{2}{3}} \, \|v^*\|_{W^{2,1}_p(Q_t)}\\
& \phantom{[\varrho(x)]_{C^{\alpha}(0,t)} } \quad + \|v^*\|_{L^{\infty}(Q_t)}^{\alpha} \, \exp(C \, \alpha\, t^{1-\frac{1}{p}} \,\|v^*\|_{W^{2,1}_p(Q_t)}) \, (1+ C \, t^{1-\frac{1}{p}} \, \|v^*\|_{W^{2,1}_p(Q_t)}) \Big) \, .
\end{align*}
We can estimate $\|v^*\|_{L^{\infty}(Q_t)} \leq \sup_{s \leq t} \|v^*\|_{W^{2-\frac{2}{p}}_p(\Omega)}$ and $\|\varrho_0\|_{C^{\alpha}(\Omega)} \leq M_0 + C \, \|\nabla \varrho_0\|_{L^p(\Omega)}$. Then we define
\begin{align*}
& \Psi_5^{(2)}(t, \, a_1, \ldots, a_4) \\
& \quad = (a_3+ C \, a_2) \, \exp(C\, t^{1-\tfrac{1}{p}} \, a_4) \, (t^{\frac{2}{3}} \, a_4 + a_4^\alpha \, \exp( C \, \alpha \, t^{1-\tfrac{1}{p}} \,a_4) \, (1 + C \, t^{1-\tfrac{1}{p}} \, a_4)) \, ,
\end{align*}
and find that $ [\varrho(x)]_{C^{\alpha}(0,t)} \leq \Psi^{(2)}_5$. The value $\Psi^{(2)}_5(0, \, a_1,\ldots,a_4) = (a_3 + C \, a_2) \, a_4^{\alpha}$ is not yet independent of $a_4$. But for arbitrary $0< \alpha^{\prime} < 1-\tfrac{3}{p}$, it also follows that $[\varrho(x)]_{C^{\alpha^{\prime}}(0,t)} \leq C \, t^{1-\tfrac{3}{p}-\alpha^{\prime}} \, \Psi_5^{(2)}$.
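The factor $t^{1-\tfrac{3}{p}-\alpha^{\prime}}$ arises from the elementary interpolation of H\"older seminorms in time: for $0 < \alpha^{\prime} < \alpha \leq 1$, $f \in C^{\alpha}([0, \, t])$ and $0 \leq t_1 < t_2 \leq t$,
\begin{align*}
\frac{|f(t_2) - f(t_1)|}{(t_2-t_1)^{\alpha^{\prime}}} = \frac{|f(t_2) - f(t_1)|}{(t_2-t_1)^{\alpha}} \, (t_2-t_1)^{\alpha - \alpha^{\prime}} \leq t^{\alpha - \alpha^{\prime}} \, [f]_{C^{\alpha}(0,t)} \, ,
\end{align*}
applied here with $\alpha = 1 - \tfrac{3}{p}$.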
The function $\Psi^{(3)}_5(t, \, a_1, \ldots, a_4) := t^{1-\frac{3}{p}-\alpha^{\prime}} \, \Psi_5^{(2)}(t, \, a_1, \ldots,a_4)$ now satisfies $\Psi_5^{(3)}(0, \, a) = 0$ independently of $a_4$. Moreover $[\varrho(x)]_{C^{\alpha^{\prime}}(0,t)} \leq \Psi_5^{(3)}$.
Thus for $\beta = 1-\tfrac{3}{p}$, we have $[\varrho]_{C^{\beta,\frac{\beta}{2}}(Q_T)} \leq C \, (\Psi_5^{(1)} + \Psi_5^{(3)}) =: C \, \Psi_5$. Clearly $\Psi_5(0, \, a_1,\ldots, a_4) = \Psi_5^{(1)}(0, \, a_1, \ldots,a_4) = a_3$. We are done.
\end{proof}
\section{The continuity estimate for $\mathcal{T}$}\label{contiT}
We now combine Proposition \ref{A1linmain} and Proposition \ref{solonnikov1} with the linearisation of the continuity equation in Proposition \ref{solonnikov2} in order to study the fixed-point map $\mathcal{T}$ described at the beginning of Section \ref{twomaps} and defined by the equations \eqref{linearT1}, \eqref{linearT2}, \eqref{linearT3}. For given $v^* \in W^{2,1}_p(Q_T; \, \mathbb{R}^3)$ and $q^* \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$, we introduce the notation $\mathscr{V}^*(t) := \mathscr{V}(t; \, q^*) + \mathscr{V}(t; \, v^*)$. To begin with, we need to control the growth of the lower-order terms in \eqref{linearT1}, \eqref{linearT2}, \eqref{linearT3}.
\begin{lemma}\label{RightHandControl}
Consider the set $\Gamma := Q \times \mathbb{R}^{N-1} \times \mathbb{R}_+ \times \mathbb{R}^3 \times \mathbb{R}^{(N-1)\times 3} \times \mathbb{R}^{3\times 3} \times \mathbb{R}^{3}$, and a function $G$ defined on $\Gamma$. For $u^* = (q^*, \, \varrho^*, \, v^*) \in \mathcal{X}_{T,+}$ we define $G^* := G(x, \, t, \, u^*,\, D^1_xu^*)$. Assume that the function $G$ satisfies, for all $t \leq T$ and $x \in \Omega$, the growth condition
\begin{align*}
& | G(x, \, t, \, u^*,\, D^1_xu^*)| \\
& \quad \leq c_1((m^*(t))^{-1}, \, |u^*|) \, \Big(|\bar{G}(x, \, t)| + |q^*_x|^{r_1} + |v^*_x|^{r_1} + |\bar{H}(x ,\, t)| \, (|\varrho^*_x| + |q^*_x|)\Big) \, ,
\end{align*}
in which $1\leq r_1 < 2-\frac{3}{p} + \frac{3}{(5-p)^+}$ and $\bar{G} \in L^p(Q_T)$, $\bar{H} \in L^{\infty,p}(Q_T)$ are arbitrary, and $c_1$ is a continuous, increasing function of two positive arguments. Then there is a continuous function $\Psi_{G} = \Psi_G(t, \, a_1, \ldots, a_5)$, defined for all non-negative arguments, such that
\begin{align*}
\|G^*\|_{L^p(Q_t)} \leq \Psi_{G}(t, \, (m^*(t))^{-1}, \, M^*(t), \, \|(q^*(0), \, v^*(0))\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)}, \, \mathscr{V}^*(t)) \, .
\end{align*}
The function $\Psi_{G}$ is increasing in all arguments and $\Psi_{G}(0, \, a_1,\ldots,a_5) = 0$ for all $a \in [\mathbb{R}_+]^5$.
\end{lemma}
\begin{proof}
With the abbreviation $c_1^* := c_1((m^*(t))^{-1}, \, \|q^*\|_{L^{\infty}(Q_t)} + \|v^*\|_{L^{\infty}(Q_t)} + M^*(t))$, we have by assumption
\begin{align*}
\|G^*\|_{L^p(Q_t)} \leq & c_1^* \, (\|\bar{G}\|_{L^p(Q_t)} + \||q^*_x| + |v^*_x|\|_{L^{pr_1}(Q_t)}^{r_1} + \|(|\varrho^*_x| + |q^*_x|) \, \bar{H}\|_{L^p(Q_t)})\\
\leq & c_1^* \, (\|\bar{G}\|_{L^p(Q_t)} + \||q^*_x| + |v^*_x|\|_{L^{pr_1}(Q_t)}^{r_1} + \||q^*_x| + |\varrho^*_x|\|_{L^{p,\infty}(Q_t)} \, \|\bar{H}\|_{L^{\infty,p}(Q_t)}) \, .
\end{align*}
Thanks to Remark \ref{parabolicspace}, the gradients $q^*_x, \, v^*_x$ belong to $L^r(Q_T)$ for $r = 2p-3+ \frac{3p}{(5-p)^+}$, and for all $t \leq T$ the inequality $\|q^*_x\|_{L^r(Q_t)} \leq \|q^*_x\|_{L^{\infty,2p-3}(Q_t)}^{\frac{2p-3}{r}} \, \|q^*_x\|_{L^{\frac{3p}{(5-p)^+},\infty}(Q_t)}^{\frac{3p}{r(5-p)^+}}$ is valid.
Thus, if $r_1 \, p < r$, we obtain that $\|q^*_x\|_{L^{pr_1}(Q_t)} \leq C \, t^{1-\frac{pr_1}{r}} \, |\Omega|^{1-\frac{pr_1}{r}} \, \mathscr{V}(t; \, q^*)$.
Moreover, since $p < \frac{3p}{(5-p)^+}$, we have $\|q^*_x\|_{L^{p, \infty}(Q_t)} \leq C \, \sup_{s \leq t} \|q^*(s)\|_{W^{2-\frac{2}{p}}_p(\Omega)}$ with $C$ depending only on $\Omega$. The terms containing $v^*_x$ are estimated in the same way.
Overall, we obtain for $G^*$
\begin{align*}
\|G^*\|_{L^p(Q_t)} \leq C_1^* \, (&\|\bar{G}\|_{L^p(Q_t)} + t^{r_1\, (1-\frac{pr_1}{r})} \, [\mathscr{V}^*(t)]^{r_1}+\|\bar{H}\|_{L^{\infty,p}(Q_t)} \, (\|\varrho^*_x\|_{L^{p,\infty}(Q_t)} + \mathscr{V}^*(t)) ) \, ,
\end{align*}
in which $C_1^* = C_1((m^*(t))^{-1}, \, \|q^*\|_{L^{\infty}(Q_t)} + \|v^*\|_{L^{\infty}(Q_t)} + M^*(t))$. Invoking Lemma \ref{HOELDERlemma} to estimate $\|q^*\|_{L^{\infty}(Q_t)} \leq \|q^0\|_{L^{\infty}(\Omega)} + t^{\gamma} \, \mathscr{V}^*(t)$, and the same for $v^*$, we see that this estimate possesses the structure claimed in the lemma.
\end{proof}
We are now ready to establish a final estimate that yields the self-mapping property.
\begin{prop}\label{estimateself}
There is a continuous function $\Psi_8 = \Psi_8(t, \, a_1,\ldots, a_5)$ on $[0, \, + \infty[ \times \mathbb{R}^5_+$, increasing in all arguments, such that for the pair $(q, \, v) = \mathcal{T}(q^*, \, v^*)$ the following estimate is valid:
\begin{align*}
\mathscr{V}(t) \leq \Psi_8(t, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \mathscr{V}^*(t)) \, .
\end{align*}
Moreover, for all $\eta \geq 0$
\begin{align*}
& \Psi_8\Big(0, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \eta\Big) \\
& =
\Psi^0_1(m_0^{-1}, \, M_0, \, \|q^0\|_{C^{1-\frac{3}{p}}(\Omega)}) \, (1+\|\nabla \varrho_0\|_{L^p(\Omega)})^{\tfrac{2p}{p-3}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}\\
& +
\left( \frac{1}{\min\{1, \, m_0\}} \right)^{\tfrac{2p}{p-3}}\, \left( \frac{M_0}{m_0} \right)^{\tfrac{p+1}{p}} \, (1+\|\nabla \varrho_0\|_{L^p(\Omega)})^{\tfrac{2p}{p-3}} \, \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} \, .
\end{align*}
\end{prop}
\begin{proof}
We first apply Proposition \ref{A1linmain} with $\varrho^* = \varrho$. It follows that
\begin{align*}
\mathscr{V}(t; \, q) \leq & C \, \Psi_1(t, \, m(t)^{-1}, \, M(t), \, \|q^*(0)\|_{ C^{\beta}(\Omega)}, \,
\mathscr{V}(t; \, q^*), \, [\varrho]_{C^{\beta,\frac{\beta}{2}}(Q_t)}, \, \|\nabla \varrho\|_{L^{p,\infty}(Q_t)}) \, \times\\
& \times ((1+[\varrho]_{C^{\beta,\frac{\beta}{2}}(Q_t)})^{\tfrac{2}{\beta}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|g^*\|_{L^p(Q_t)}) \, .
\end{align*}
Due to Corollary \ref{normrhomain} and the choice $\varrho^* = \varrho$, we have for $\beta := 1-\frac{3}{p}$
\begin{align*}
\max\{ \frac{1}{m(t)}, \, M(t)\} \leq & \Psi_3(t, \, \max\{m_0^{-1}, \, M_0\}, \, \mathscr{V}(t; \, v^*)) =: \Psi_3(t, \ldots) \, , \\
\|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)} \leq & \Psi_4(t, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \mathscr{V}(t; \, v^*)) =: \Psi_4(t, \ldots) \, ,\\
[\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} \leq & \Psi_5(t, \, m_0^{-1}, \, M_0, \, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \mathscr{V}(t; \, v^*)) =: \Psi_5(t, \ldots) \, .
\end{align*}
Moreover, we can apply Lemma \ref{RightHandControl} to the right-hand side defined in \eqref{A1right}. (Choose $G = g$, $r_1 = 1$, $\bar{H}(x, \, t) := |\tilde{b}(x, \, t)|$ and $\bar{G}(x, \, t) := |\tilde{b}_x(x, \, t)|$.) It follows that
\begin{align*}
\|g^*\|_{L^p(Q_t)} \leq & \Psi_g( t, \,\Psi_3(t, \, \ldots), \, \Psi_3(t, \, \ldots), \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)},\, \Psi_4(t, \, \ldots), \, \mathscr{V}^*(t))\,\\
=: & \Psi_g(t, \ldots) \, .
\end{align*}
Combining all these estimates we can bound the quantity $ \mathscr{V}(t; \, q) $ by the function
\begin{align*}
\Psi_8^{(1)} := & \Psi_1( t, \, \Psi_3(t, \, \ldots) , \,\Psi_3(t, \, \ldots), \, \|q^0\|_{C^{\beta}(\Omega)}, \,
\mathscr{V}(t; \, q^*), \, \Psi_5(t, \,\ldots), \, \Psi_4(t, \, \ldots)) \, \times \\
& \times \Big( (1+\Psi_5(t, \, \ldots))^{\tfrac{2}{\beta}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \Psi_g(t, \ldots) \Big) \, .
\end{align*}
Since $\mathscr{V}(t; \, v^*), \, \mathscr{V}(t; \, q^*) \leq \mathscr{V}^*(t)$, we can reinterpret the latter expression as a function $\Psi^{(1)}_{8} := \Psi^{(1)}_8(t, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \mathscr{V}^*(t))$.
Moreover, at $t = 0$, we can use the estimates proved in Proposition \ref{A1linmain} and in the Corollaries \ref{normvmain} and \ref{normrhomain}. Recall in particular that $\Psi_1(0, \, a_1, \ldots, a_6) = \Psi_1^0(a_1, \, a_2, \, a_3)$. Moreover, $\Psi_3(0, \, M_0, \, \eta) = M_0$. Thus, since $\Psi_5(0, \, a_1, \ldots, a_4) = a_3$ and $\Psi_g(0, \ldots) = 0$ (see Lemma \ref{RightHandControl}), we can compute that
\begin{align}\label{formulatpsi6null}
& \Psi^{(1)}_8(0, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \eta) \\
& \quad= \Psi^0_1(m_0^{-1}, \, M_0, \, \|q^0\|_{C^{1-\frac{3}{p}}(\Omega)}) \, (1+\|\nabla \varrho_0\|_{L^p(\Omega)})^{\tfrac{2p}{p-3}} \, \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} \nonumber\, .
\end{align}
We next apply Corollary \ref{normvmain} with $\varrho^* = \varrho$ and $f = f^*$ to obtain that
\begin{align*}
\mathscr{V}(t; \, v) \leq & C \, \Psi_2(t, \, m(t)^{-1}, \, M(t), \, \sup_{s \leq t} [\varrho(s)]_{C^{\alpha}(\Omega)}) \, (1+\sup_{s \leq t} [\varrho(s)]_{C^{\alpha}(\Omega)})^{\frac{2}{\alpha}}\, \times\\
& \times ( \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}+ \|f^*\|_{L^p(Q_t)}) \, .
\end{align*}
We apply Lemma \ref{RightHandControl} to $G = f$ (recall \eqref{A3right}, and choose $r_1 = 1$, $\bar{H}(x, \, t) = 1$ and $\bar{G}(x, \, t) := |\tilde{b}(x, \, t)| + |\bar{b}(x, \, t)|$ in the statement of Lemma \ref{RightHandControl}). For $\alpha = 1-\frac{3}{p}$, we estimate $\mathscr{V}(t; \, v)$ from above by
\begin{align*}
\Psi_2(& t, \, \Psi_3(t, \, \ldots) \,, \Psi_3(t, \, \ldots), \, \Psi_5(t, \, \ldots)) \, (1+ \Psi_5(t, \, \ldots))^{\frac{2}{\alpha}}\, (\|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}+ \Psi_f(t,\ldots) ) \, .
\end{align*}
We reinterpret this expression as a function $\Psi^{(2)}_{8,t}$ of the same arguments as above, and we note that
\begin{align*}
& \Psi^{(2)}_8(0, \, m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)}, \, \eta) \\
& = \Psi^0_2(m_0^{-1}, \, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}) \, (1+\|\nabla \varrho_0\|_{L^p(\Omega)})^{\tfrac{2p}{p-3}} \, \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} \\
& = \left( \frac{1}{\min\{1, \, m_0\}} \right)^{\tfrac{2p}{p-3}}\, \left( \frac{M_0}{m_0} \right)^{\tfrac{p+1}{p}} \, (1+\|\nabla \varrho_0\|_{L^p(\Omega)})^{\frac{2p}{p-3}} \, \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)} \, .
\end{align*}
The claim follows.
\end{proof}
\begin{prop}\label{ContiEststate}
We adopt the assumptions of Theorem \ref{MAIN2}.
For a given pair $(q^*, \, v^*) \in \mathcal{Y}_T$, we define a map $\mathcal{T}(q^*, \, v^*) = (q, \, v)$ via solution to the equations \eqref{linearT1}, \eqref{linearT2}, \eqref{linearT3} with homogeneous boundary conditions \eqref{lateralq}, \eqref{lateralv} and initial conditions $(q^0, \, \varrho_0, \, v^0)$. Then, there are $0 < T_0 \leq T$ and $\eta_0 > 0$ depending on the data via the vector $R_0 := (m_0^{-1},\, M_0, \, \|\nabla \varrho_0\|_{L^p(\Omega)}, \, \|(q^0, \, v^0)\|_{W^{2-\frac{2}{p}}_p(\Omega)})$ such that $\mathcal{T}$ maps the ball with radius $\eta_0$ in $\mathcal{Y}_{T_0}$ into itself.
\end{prop}
\begin{proof}
We apply Lemma \ref{selfmapT} with $\Psi(t, \, R_0, \, \eta) := \Psi_8(t, \, R_0, \, \eta)$ from Proposition \ref{estimateself}, and the claim follows.
\end{proof}
\section{Fixed point argument and proof of the theorem on short-time well-posedness}\label{FixedPointIter}
Starting from $(q^1, \, v^1) = 0$, we consider a fixed point iteration $(q^{n+1}, \, v^{n+1}) := \mathcal{T} \, (q^{n}, \, v^{n})$ for $ n \in \mathbb{N}$.
Recall that this means first considering $\varrho^{n+1} \in W^{1,1}_{p,\infty}(Q_T)$ solution to
\begin{align*}
\partial_t \varrho^{n+1} + \divv(\varrho^{n+1} \, v^n) = 0 \text{ in } Q_T\, , \quad \varrho^{n+1}(x, \, 0) = \varrho_0(x) \text{ in } \Omega \, .
\end{align*}
Then we introduce $(q^{n+1}, \, v^{n+1}) \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1}) \times W^{2,1}_p(Q_T; \, \mathbb{R}^{3})$ via solution in $Q_{T}$ to
\begin{align*}
& R_q(\varrho^{n+1}, \, q^n) \, \partial_t q^{n+1} - \divv (\widetilde{M}(\varrho^{n+1}, \, q^n) \, \nabla q^{n+1}) = - \divv (\widetilde{M}(\varrho^{n+1}, \, q^n) \, \tilde{b}(x, \,t) ) \\
& \quad +(R_{\varrho}(\varrho^{n+1}, \, q^n) \, \varrho^{n+1} - R(\varrho^{n+1}, \, q^n)) \, \divv v^n - R_q(\varrho^{n+1}, \, q^n) \, v^n \cdot \nabla q^n + \tilde{r}(\varrho^{n+1}, \, q^n)\, , \nonumber\\[0.2cm]
& \varrho^{n+1} \, \partial_t v^{n+1} - \divv \mathbb{S}(\nabla v^{n+1}) = - \nabla P(\varrho^{n+1}, \, q^n) - \varrho^{n+1} \, (v^n \cdot \nabla) v^n \nonumber\\
& \quad \phantom{\varrho^{n+1} \, \partial_t v^{n+1} - \divv \mathbb{S}(\nabla v^{n+1}) = \, } + \tilde{b}(x, \,t) \cdot R(\varrho^{n+1}, \, q^n) + \varrho^{n+1}\, \bar{b}(x, \,t) \, .
\end{align*}
The boundary conditions are $\nu \cdot \nabla q^{n+1} = 0$ and $v^{n+1} = 0$ on $S_T$, and the initial data are $q^{n+1}(x, \, 0) = q^0(x)$ and $v^{n+1}(x, \, 0) = v^0(x)$ in $\Omega$.
Recalling \eqref{Vfunctor}, we define $\mathscr{V}^{n+1}(t) := \mathscr{V}(t; \, q^{n+1}) + \mathscr{V}(t; \, v^{n+1})$. Since obviously $\mathscr{V}^1(t) \equiv 0$, Proposition \ref{ContiEststate} implies the existence of parameters $T_0, \, \eta_0 > 0$ such that the following uniform estimates hold:
\begin{align}\label{uniform}
\sup_{n \in \mathbb{N}} \mathscr{V}^{n}(T_0) \leq \eta_0 \, , \quad \sup_{n \in \mathbb{N}} \|\varrho_{n}\|_{W^{1,1}_{p,\infty}(Q_{T_0})} \leq C_0 \, .
\end{align}
In Theorem \ref{iter} below, we obtain that the fixed-point iteration yields strongly convergent subsequences in $L^2(Q_{t,t+t_1})$ for the components of $q^n$, $\varrho_n$ and $v^n$ and for the gradients $q^n_x$ and $v^n_x$. Here $0 < t_1 \leq T_0$ is a fixed number and $t \in [0, \, T_0-t_1]$ is arbitrary. Thus, we obtain the convergence in $L^2(Q_{T_0})$ of these functions. The passage to the limit in the approximation scheme is then a straightforward exercise, since we can rely on a uniform bound in $\mathcal{X}_{T_0}$. We therefore omit the details of this step.
We next prove sufficient convergence properties of the sequence $\{(q^{n}, \, \varrho^n, \, v^n)\}_{n\in \mathbb{N}}$ by means of contractivity estimates in a lower-order space. This estimate also guarantees uniqueness. The proof is unfortunately lengthy due to the complex form of the PDE system, but it is elementary in essence and may be skipped.
\begin{theo}\label{iter}
For $n \in \mathbb{N}$, we define
\begin{align*}
r^{n+1} & := q^{n+1} - q^n, \quad \sigma^{n+1} := \varrho^{n+1} - \varrho^{n}, \quad w^{n+1} := v^{n+1} - v^n\\
e^{n+1} & := |r^{n+1}| + |w^{n+1}| \, .
\end{align*}
Then there are $k_0, \, p_0 > 0$ and $0 < t_1 \leq T_0$ such that for all $t \in [0, \, T_0 -t_1]$, the quantity
\begin{align*}
E^{n+1}(t) := & k_0 \, \sup_{\tau \in [t, \, t+t_1]} ( \|e^{n+1}(\tau)\|_{L^2(\Omega)}^2 + \|\sigma^{n+1}(\tau)\|_{L^2(\Omega)}^2) \\
& + p_0 \, \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2) \, dxd\tau
\end{align*}
satisfies $ E^{n+1}(t) \leq \frac{1}{2} \, E^{n}(t)$ for all $n \in \mathbb{N}$.
\end{theo}
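Before giving the proof, we note the consequence that motivates the statement: iterating the contraction inequality yields $E^{n+1}(t) \leq 2^{-n} \, E^{1}(t)$ for all $n \in \mathbb{N}$, whence
\begin{align*}
\sum_{n \in \mathbb{N}} [E^{n+1}(t)]^{\frac{1}{2}} \leq [E^{1}(t)]^{\frac{1}{2}} \, \sum_{n \in \mathbb{N}} 2^{-\frac{n}{2}} < + \infty \, .
\end{align*}
In particular, $\{q^n\}$ and $\{v^n\}$ are Cauchy sequences in $L^{\infty}(t, \, t+t_1; \, L^2(\Omega)) \cap L^2(t, \, t+t_1; \, W^{1,2}(\Omega))$ and $\{\varrho^n\}$ is a Cauchy sequence in $L^{\infty}(t, \, t+t_1; \, L^2(\Omega))$, which yields the strong convergences in $L^2(Q_{T_0})$ used above.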
\begin{proof}
For brevity, we denote $R^n := R(\varrho^{n+1}, \, q^n)$, $\widetilde{M}^n := \widetilde{M}(\varrho^{n+1}, \, q^n)$ and $P^n := P(\varrho^{n+1}, \, q^n)$. We also define $g^n := (R_{\varrho}^n \, \varrho^{n+1} - R^n) \, \divv v^n - R_q^n \, v^n \cdot \nabla q^n + \tilde{r}(\varrho^{n+1}, \, q^n)$.
The differences $r^{n+1}$, $\sigma^{n+1}$ and $w^{n+1}$ solve
\begin{align}\label{difference}
& R_q^n \partial_t r^{n+1} - \divv (\widetilde{M}^n \, \nabla r^{n+1}) = \\
& \quad g^n - g^{n-1} + (R_q^{n-1}-R_q^n) \, \partial_t q^n-\divv((\widetilde{M}^{n-1} -\widetilde{M}^n) \, (\nabla q^n - \tilde{b}(x, \, t))) \, ,\nonumber\\
& \label{difference2} \partial_t \sigma^{n+1} + \divv (\sigma^{n+1} \, v^n + \varrho_n \, w^n) = 0 \, ,\\
\label{difference3}
& \varrho^{n+1} \, \partial_t w^{n+1} - \divv \mathbb{S}(\nabla w^{n+1}) = (R^n - R^{n-1}) \cdot \tilde{b}(x,t)- \nabla (P^n-P^{n-1}) \\
& - \sigma^{n+1} \, [\partial_t v^n + (v^{n} \cdot \nabla) v^n - \bar{b}(x, \, t)] - \varrho_{n} \, [(w^{n} \cdot \nabla )v^n +(v^{n-1} \cdot \nabla )w^n] \nonumber\, .
\end{align}
together with the boundary conditions $\nu \cdot \nabla r^{n+1} = 0$ and $w^{n+1} = 0$ on $S_{T_0}$ and homogeneous initial conditions. We multiply \eqref{difference} by $r^{n+1}$ and make use of the formula
\begin{align*}
\frac{1}{2} \, \partial_t (R_q^n \, r^{n+1} \cdot r^{n+1}) = R_q^n \partial_t r^{n+1} \cdot r^{n+1}+ \frac{1}{2} \, \partial_t R^n_q \, r^{n+1} \cdot r^{n+1} \, .
\end{align*}
We introduce the abbreviation $a^n(r^{n+1}, \, r^{n+1}) := \frac{1}{2} \, R_q^n \, r^{n+1} \cdot r^{n+1}$. After integration over $\Omega$, and using the Gauss divergence theorem, we obtain that
\begin{align*}
& \frac{d}{dt}\, \int_{\Omega} a^n(r^{n+1}, \, r^{n+1}) \, dx + \int_{\Omega} \widetilde{M}^n \nabla r^{n+1} \cdot \nabla r^{n+1} \, dx \\
& \quad = \int_{\Omega} [g^n - g^{n-1} + (R_q^{n-1}-R_q^n) \, \partial_t q^n] \cdot r^{n+1} \, dx\\
& \qquad + \int_{\Omega} (\widetilde{M}^{n-1} -\widetilde{M}^n) \, (\nabla q^n-\tilde{b}) \cdot \nabla r^{n+1} \, dx + \int_{\Omega} \frac{1}{2} \, \partial_t R^n_q \, r^{n+1} \cdot r^{n+1} \, dx \, .
\end{align*}
On the interval $[0, \, T_0]$, the \emph{a priori} bounds \eqref{uniform} ensure that the smallest eigenvalue of $\widetilde{M}^n = \widetilde{M}(\varrho^{n+1}, \, q^n)$ is strictly bounded away from zero. Thus $\widetilde{M}^n \nabla r^{n+1} \cdot \nabla r^{n+1} \geq \lambda_0 \, |\nabla r^{n+1}|^2$. Invoking Young's inequality and standard steps, we obtain
\begin{align}\label{energy}
& \frac{d}{dt}\, \int_{\Omega} a^n(r^{n+1}, \, r^{n+1}) \, dx + \frac{\lambda_0}{2} \, \int_{\Omega} |\nabla r^{n+1}|^2 \, dx \nonumber\\
& \leq \int_{\Omega} [|g^n - g^{n-1}| + |R_q^{n-1}-R_q^n|\, |\partial_t q^n|] \, |r^{n+1}| \, dx\nonumber\\
& +\frac{1}{2\lambda_0} \, \int_{\Omega} |\widetilde{M}^{n-1} -\widetilde{M}^n|^2 \, (|\nabla q^n|^2+|\tilde{b}|^2) \, dx + \int_{\Omega} \frac{1}{2} \, |\partial_t R^n_q|\, |r^{n+1}|^2 \, dx \, .
\end{align}
We want to estimate the differences $g^n - g^{n-1}$. For brevity, we denote by $K_0$ a generic constant depending possibly on $\inf_{n\in \mathbb{N}, \, (x,t) \in Q_{T_0}} \varrho_n(x,t)$ and on $\sup_{n\in\mathbb{N}} \|(q^n, \, \varrho_n, \, v^n)\|_{L^{\infty}(Q_{T_0})}$. These quantities are bounded independently of $n$ due to the choice of $T_0$; $K_0$ might moreover depend on the $C^2$-norm of the maps $R$ and $\widetilde{M}$ over the range of $(\varrho_n, \, q^n)$ on $Q_{T_0}$. This range is contained in a compact set $\mathcal{K}$ of $\mathbb{R}_+ \times \mathbb{R}^{N-1}$. Thus $|R(\varrho^{n+1}, \, q^n) -R(\varrho_{n}, \, q^{n-1})| \leq \|R\|_{C^2(\mathcal{K})} \, (|\sigma^{n+1}| + |r^n|) \leq K_0 \, (|\sigma^{n+1}| + |r^n|)$. By means of this reasoning, we readily show that
\begin{align*}
|g^n - g^{n-1}| \leq K_0 \, \Big[(1+|v^n_x| + |v^n| \, |q^n_x|) \, (|\sigma^{n+1}| + |r^n|) + |w^n_x| + |q^n_x| \, |w^n| + |v^n| \, | r_x^n|\Big] \, .
\end{align*}
Similarly we estimate $| \widetilde{M}^{n-1} -\widetilde{M}^n|^2 \leq K_0 \, (|\sigma^{n+1}|^2 + |r^n|^2)$ and $|\partial_t R^n_q| \leq K_0 \, (|\varrho^{n+1}_t| + |q^n_t|)$.
We rearrange terms, and we recall that $e^n := |r^n|+ |w^n|$. From \eqref{energy}, we obtain the estimate
\begin{align}\label{energy2}
& \frac{d}{dt}\, \int_{\Omega} a^n(r^{n+1}, \, r^{n+1}) \, dx + \frac{\lambda_0}{2} \, \int_{\Omega} |\nabla r^{n+1}|^2 \, dx \nonumber\\
& \leq K_0 \int_{\Omega} |r^{n+1}| \, (e^n+|\sigma^{n+1}|) \, (1+|v^n_x| + |q^n_x| + |q^n_t|) \, dx + K_0 \, \int_{\Omega} (|\varrho^{n+1}_t| + |q^n_t|) \, |r^{n+1}|^2 \, dx \nonumber\\
& \quad + K_0 \, \int_{\Omega} |r^{n+1}| \, (|w^n_x| + |r^n_x|) \, dx + K_0 \, \int_{\Omega} (e^n + |\sigma^{n+1}|)^2 \, (|q^n_x|^2+|\tilde{b}|^2) \, dx \, .
\end{align}
To transform the right-hand side, we apply H\"older's inequality, the Sobolev embedding theorem and Young's inequality according to the schema
\begin{align}\begin{split}\label{schema}
\int_{\Omega} a \, b \, c \, dx \leq & \|a\|_{L^3} \, \|b\|_{L^6} \, \|c\|_{L^2}\\
\leq & C \, \|a\|_{L^3} \, (\|\nabla b\|_{L^2} + \|b\|_{L^2}) \, \|c\|_{L^2}\\
\leq & \frac{\lambda_0}{4} \, \|\nabla b\|_{L^2}^2 + C^2 \, (\frac{1}{\lambda_0}+\frac{1}{4}) \, \|a\|_{L^3}^2 \, \|c\|_{L^2}^2 + \|b\|_{L^2}^2 \, .
\end{split}
\end{align}
We apply this first with $a = 1+ |v^n_x| + |q^n_x| + |q^n_t|$ and $b = r^{n+1}$ and $c = e^n + |\sigma^{n+1}|$. Thus,
\begin{align*}
& \int_{\Omega} |r^{n+1}| \, (e^n + |\sigma^{n+1}|) \, (1+|v^n_x| + |q^n_x| + |q^n_t|) \, dx \leq \frac{\lambda_0}{4} \, \|\nabla r^{n+1}\|_{L^2}^2 \\
& \quad + C^2 \, (\frac{1}{\lambda_0} + \frac{1}{4}) \, \|1+|v^n_x| + |q^n_x| + |q^n_t|\|_{L^3}^2 \, (\|e^n\|_{L^2}^2+\|\sigma^{n+1}\|^2_{L^2}) + \|r^{n+1}\|_{L^2}^2 \, .
\end{align*}
We choose next $a = |\varrho^{n+1}_t| + |q^n_t|$ and $b = r^{n+1} = c$, to get
\begin{align*}
\int_{\Omega} (|\varrho^{n+1}_t| + |q^n_t|) \, |r^{n+1}|^2 \, dx \leq & \frac{\lambda_0}{4} \, \|\nabla r^{n+1}\|_{L^2}^2 \\
& + [C^2 \, (\frac{1}{\lambda_0}+\frac{1}{4}) \, \||\varrho^{n+1}_t| + |q^n_t|\|_{L^3}^2 + 1] \, \|r^{n+1}\|_{L^2}^2 \, .
\end{align*}
Employing Young's inequality we find for $\delta > 0$ arbitrary that
\begin{align*}
\int_{\Omega} (e^n + |\sigma^{n+1}|)^2 \, (|q^n_x|^2 + |\tilde{b}|^2)\, dx & \leq (\|q^n_x\|_{L^{\infty}(\Omega)}^2 + \|\tilde{b}\|_{L^{\infty}(\Omega)}^2) \, (\|e^n\|_{L^2}^2 +\|\sigma^{n+1}\|_{L^2}^2) \, , \\
K_0 \, \int_{\Omega} |r^{n+1}| \, (|w^n_x| + |r^n_x|) \, dx & \leq \delta \, \int_{\Omega} (|w^n_x|^2 + |r^n_x|^2) \, dx + \frac{K^2_0}{4\delta} \, \int_{\Omega} |r^{n+1}|^2 \, dx \, .
\end{align*}
From \eqref{energy2} we deduce the inequality
\begin{align}\label{energy3}
& \frac{d}{dt}\, \int_{\Omega} a^n(r^{n+1}, \, r^{n+1}) \, dx + \frac{\lambda_0}{4} \, \int_{\Omega} |\nabla r^{n+1}|^2 \, dx \\
& \leq D(t) \, (\|e^n\|_{L^2}^2 + \|\sigma^{n+1}\|_{L^2}^2)+ D^{(1)}_{\delta}(t) \, \|r^{n+1}\|_{L^2}^2 + \delta \, \int_{\Omega} (|w^n_x|^2 + |r^n_x|^2) \, dx \, , \nonumber
\end{align}
in which the coefficients $D$ and $D^{(1)}_{\delta}$ satisfy
\begin{align*}
D(t) & \leq K_0 \, (|\Omega|^{\frac{2}{3}} +\|v^n_x(t)\|_{L^3}^2 + \|q^n_x(t)\|_{L^3}^2 + \|q^n_t(t)\|_{L^3}^2 + \|q^n_x(t)\|_{L^{\infty}}^2 + \|\tilde{b}(t)\|_{L^{\infty}}^2 ) \, ,\\
D^{(1)}_{\delta}(t) & \leq K_0 \, (\|\varrho^{n+1}_t(t)\|_{L^3}^2 + \|q^n_t(t)\|_{L^3}^2 + \delta^{-1}) \, .
\end{align*}
Next we multiply \eqref{difference2} by $\sigma^{n+1}$ and integrate over $\Omega$; this yields
\begin{align*}
\frac{1}{2} \, \frac{d}{dt} \int_{\Omega} |\sigma^{n+1}|^2 \, dx = - \frac{1}{2} \, \int_{\Omega} \divv v^n \, (\sigma^{n+1})^2 \, dx - \int_{\Omega} \divv( \varrho_n \, w^n) \, \sigma^{n+1} \, dx\, ,\\
\frac{1}{2} \, \frac{d}{dt} \int_{\Omega} |\sigma^{n+1}|^2 \, dx \leq \int_{\Omega} [\frac{1}{2} \, |v^n_x| \, |\sigma^{n+1}|^2 + K_0 \, |w^n_x| \, |\sigma^{n+1}| + |\varrho^{n}_x| \, |w^n| \, |\sigma^{n+1}|] \, dx \, .
\end{align*}
We note that $\int_{\Omega} \frac{1}{2} \, |v^n_x| \, (\sigma^{n+1})^2 \, dx\leq \frac{1}{2} \, \|v^n_x\|_{L^{\infty}(\Omega)} \, \|\sigma^{n+1}\|_{L^2}^2$, and
employing Young's inequality we see that $\int_{\Omega} K_0 \, |w^n_x| \, |\sigma^{n+1}|\, dx \leq \delta \, \int_{\Omega} |w^n_x|^2 \, dx + \frac{K^2_0}{4\delta} \, \|\sigma^{n+1}\|_{L^2}^2$. As already seen
\begin{align*}
\int_{\Omega} |\varrho^{n}_x| \, |w^n| \, |\sigma^{n+1}| \, dx \leq \delta \, \|\nabla w^n\|_{L^2}^2 + \frac{C^2}{4} \, \left(\frac{1}{\delta}+1\right) \, \|\varrho^{n}_x\|_{L^3}^2 \, \| \sigma^{n+1}\|_{L^2}^2 + \|w^n\|_{L^2}^2 \, ,
\end{align*}
allowing us to conclude that
\begin{align}\label{energy4}
\frac{1}{2} \, \frac{d}{dt} \int_{\Omega} |\sigma^{n+1}|^2 \, dx \leq 2\delta \, \int_{\Omega} |w^n_x|^2 \, dx + D^{(2)}_{\delta}(t) \, \|\sigma^{n+1}\|_{L^2}^2 + \|e^n\|_{L^2}^2 \, ,
\end{align}
in which $ D^{(2)}_{\delta}(t) \leq K_0 \, \delta^{-1} \, (1 + \|\varrho^{n}_x(t)\|_{L^3(\Omega)}^2+\|v^{n}_x(t)\|_{L^{\infty}(\Omega)}^2 )$. Finally, we multiply \eqref{difference3} by $w^{n+1}$ and obtain that
\begin{align*}
& \frac{\varrho^{n+1}}{2}\, \partial_t |w^{n+1}|^2 - \divv \mathbb{S}(\nabla w^{n+1}) \cdot w^{n+1} = - \nabla (P^n-P^{n-1})\cdot w^{n+1} + (R^n -R^{n-1}) \tilde{b} \cdot w^{n+1}\\
& - \sigma^{n+1} \, [\partial_t v^n+(v^{n} \cdot \nabla) v^n - \bar{b}] \cdot w^{n+1} - \varrho_{n} \, [(w^{n} \cdot \nabla )v^n+(v^{n-1} \cdot \nabla )w^n]\cdot w^{n+1} \, .
\end{align*}
After integration over $\Omega$,
\begin{align*}
& \frac{1}{2} \, \frac{d}{dt} \int_{\Omega} \varrho^{n+1} \, |w^{n+1}|^2 \, dx + \int_{\Omega} \mathbb{S}(\nabla w^{n+1}) \cdot \nabla w^{n+1} \, dx \\
& = \frac{1}{2} \, \int_{\Omega} \partial_t \varrho^{n+1} \, |w^{n+1}|^2 \, dx + \int_{\Omega} (P^n-P^{n-1}) \, \divv w^{n+1} \, dx + \int_{\Omega}(R^n -R^{n-1}) \tilde{b}\cdot w^{n+1} \, dx \\
& - \int_{\Omega} \{\sigma^{n+1} \, [\partial_t v^n + (v^{n} \cdot \nabla) v^n - \bar{b}] - \varrho_{n} \, [(w^{n} \cdot \nabla )v^n+(v^{n-1} \cdot \nabla )w^n]\}\cdot w^{n+1} \, dx \, .
\end{align*}
We use $ \int_{\Omega} \mathbb{S}(\nabla w^{n+1}) \cdot \nabla w^{n+1} \, dx \geq \nu_0 \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx$. We estimate
\begin{align*}
\left| \int_{\Omega} (P^n-P^{n-1}) \, \divv w^{n+1} \, dx \right | & \leq \frac{\nu_0}{2} \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx + \frac{1}{2\nu_0} \, \int_{\Omega} |P^n-P^{n-1}|^2 \, dx\\
& \leq \frac{\nu_0}{2} \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx + \frac{K^2_0}{2\nu_0} \, \int_{\Omega} (|\sigma^{n+1}|^2+|r^n|^2) \, dx \, .
\end{align*}
Further,
\begin{align*}
|(R^n -R^{n-1}) \tilde{b}\cdot w^{n+1}| & \leq K_0 \, (|\sigma^{n+1}| + |r^n|) \, |\tilde{b}| \, |w^{n+1}| \, ,\\
|\sigma^{n+1} \, (\partial_t v^n + (v^{n} \cdot \nabla) v^n - \bar{b}) \cdot w^{n+1}| & \leq K_0 \, |\sigma^{n+1}| \, |w^{n+1}| \, (|v^n_t| + |v^n_x| + |\bar{b}|) \, ,\\
|\varrho_{n} \, [(w^{n} \cdot \nabla )v^n+(v^{n-1} \cdot \nabla )w^n]\cdot w^{n+1}| & \leq K_0 \, |w^{n+1}| \, (|w^n| \,|v^n_x| + |w^n_x|) \, .
\end{align*}
Thus,
\begin{align*}
& \frac{1}{2} \, \frac{d}{dt} \int_{\Omega} \varrho^{n+1} \, |w^{n+1}|^2 \, dx + \frac{\nu_0}{2} \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx\\
& \leq \frac{1}{2} \, \int_{\Omega} |\partial_t \varrho^{n+1}| \, |w^{n+1}|^2 \, dx
+ \frac{K^2_0}{2\nu_0} \, \int_{\Omega} (|\sigma^{n+1}|^2+|r^n|^2) \, dx + K_0 \, \int_{\Omega}|r^n| \, |\tilde{b}| \, |w^{n+1}| \, dx \\
& + K_0 \, \int_{\Omega} [|\sigma^{n+1}| \, |w^{n+1}| \, (|v^n_t| + |v^n_x| +|\tilde{b}|+ |\bar{b}|) + (|v^n_x| \, |w^n| + |w^n_x|) \, |w^{n+1}|] \, dx\, .
\end{align*}
By means of \eqref{schema} and Young's inequality, we can also show that
\begin{align*}
& K_0\, \int_{\Omega} |\sigma^{n+1}| \, |w^{n+1}| \, (|v^n_t| + |v^n_x|+|\tilde{b}|+ |\bar{b}|) \, dx \leq \frac{\nu_0}{8} \, \|\nabla w^{n+1}\|_{L^2}^2 \\
& \quad + C^2K^2_0 \, (\frac{2}{\nu_0}+\frac{1}{4}) \, \||v^n_t| + |v^n_x|+|\tilde{b}|+ |\bar{b}|\|_{L^3}^2 \, \|\sigma^{n+1}\|_{L^2}^2 + \|w^{n+1}\|_{L^2}^2 \, ,\\
& K_0 \, \int_{\Omega}|r^n| \, |\tilde{b}| \, |w^{n+1}| \, dx \leq \frac{\nu_0}{8} \, \|\nabla w^{n+1}\|_{L^2}^2 +C^2K^2_0 \, (\frac{2}{\nu_0}+\frac{1}{4}) \, \|\tilde{b}\|_{L^3}^2 \, \|r^{n}\|_{L^2}^2 + \|w^{n+1}\|_{L^2}^2 \, ,\\
& K_0 \, \int_{\Omega} |v^n_x| \, |w^n| \, |w^{n+1}| \, dx \leq \frac{\nu_0}{8} \, \|\nabla w^{n+1}\|_{L^2}^2 +C^2K^2_0 \, (\frac{2}{\nu_0}+\frac{1}{4}) \, \|v^n_x\|_{L^3}^2 \, \|w^n\|_{L^2}^2 + \|w^{n+1}\|_{L^2}^2 \, , \\
& \int_{\Omega} |\partial_t \varrho^{n+1}| \, |w^{n+1}|^2 \, dx \leq \frac{\nu_0}{8} \, \|\nabla w^{n+1}\|_{L^2}^2 + \left(\frac{2C^2}{\nu_0} \, \|\varrho^{n+1}_t\|_{L^3}^2 + 1\right) \|w^{n+1}\|_{L^2}^2 \, ,\\
& K_0 \, \int_{\Omega} |w^n_x| \, |w^{n+1}| \, dx \leq \delta \, \int_{\Omega} |w^n_x|^2 \, dx + \frac{K^2_0}{4\delta} \, \int_{\Omega} |w^{n+1}|^2 \, dx \, .
\end{align*}
Overall, we obtain for the estimation of \eqref{difference3} that
\begin{align}\label{energy5}
& \frac{1}{2} \, \frac{d}{dt} \int_{\Omega} \varrho^{n+1} \, |w^{n+1}|^2 \, dx + \frac{\nu_0}{2} \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx \leq \delta \, \int_{\Omega} |w^n_x|^2 \, dx\nonumber\\
& \quad \qquad \quad + D^{(3)}(t) \, (\|e^n\|_{L^2}^2 + \|\sigma^{n+1}\|_{L^2}^2) + D^{(4)}_{\delta}(t) \, \|w^{n+1}\|_{L^2}^2 \, ,
\end{align}
in which $D^{(3)}(t) \leq K_0 \, (\|v^n_t\|_{L^3}^2 + \|v^n_x\|_{L^3}^2+\|\tilde{b}\|_{L^3}^2+ \|\bar{b}\|_{L^3}^2)$ and $D^{(4)}_{\delta}(t) \leq K_0 \, (\|\varrho^{n+1}_t\|_{L^3}^2 + \delta^{-1})$.
We add the three inequalities \eqref{energy3}, \eqref{energy4} and \eqref{energy5} and get
\begin{align}\label{energy6}
& \frac{d}{dt}\, \int_{\Omega} \{a^n(r^{n+1}, \, r^{n+1}) +\tfrac{1}{2} \, |\sigma^{n+1}|^2 + \tfrac{1}{2} \, \varrho_n \, |w^{n+1}|^2\} \, dx \nonumber \\
& + \frac{\lambda_0}{2} \, \int_{\Omega} |\nabla r^{n+1}|^2 \, dx + \frac{\nu_0}{2} \, \int_{\Omega} |\nabla w^{n+1}|^2 \, dx \nonumber\\
& \leq 4 \, \delta \, \int_{\Omega} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2) \, dx + F_{\delta}(t) \, (\|e^n\|_{L^2}^2 +\|\sigma^{n+1}\|_{L^2}^2) + F^{(1)}_{\delta}(t) \, \|e^{n+1}\|_{L^2}^2 \, .
\end{align}
In this inequality we have introduced $F_{\delta}(t) := 1 + D(t) + D^{(2)}_{\delta}(t) + D^{(3)}(t)$, and $F_{\delta}^{(1)}(t) := D^{(1)}_{\delta}(t) + D^{(4)}_{\delta}(t)$. These definitions and the inequalities above show that
\begin{align*}
F_{\delta}(t) \leq K_0 \, \Big[ & (\|q^n_x\|_{L^3} + \|q^n_t\|_{L^3} +\|q^n_x\|_{L^{\infty}} +\|v^n_x\|_{L^3} + \|v^n_t\|_{L^3}+\|\varrho^n_x\|_{L^3})^2\\
& + \|\bar{b}\|_{L^3}^2+ \|\tilde{b}\|_{L^{\infty}}^2 + \|\tilde{b}\|_{L^{3}}^2 + \delta^{-1}\Big] \, .
\end{align*}
Consequently, due to embedding properties of $W^{2,p}(\Omega)$, it follows for $s \in [0, \, T_0]$ arbitrary and for $0< t_1 \leq T_0$ and $t \leq T_0-t_1$ that
\begin{align}\label{energy7}
& |F_{\delta}(s)| \leq C \, K_0 \, [\|q^n(s)\|_{W^{2,p}}^2 + \|v^n(s)\|_{W^{2,p}}^2 + \|\varrho^n_x(s)\|_{L^{p}}^2 + \|\tilde{b}(s)\|_{W^{1,p}}^2 + \|\bar{b}(s)\|^2_{L^p} + \delta^{-1}] \, , \nonumber\\
& \int_{t}^{t+t_1} F_{\delta}(s) \, ds \leq \tilde{K}_0 \, \{ t_1^{1-\frac{2}{p}} \, [\|q^n\|_{W^{2,1}_p(Q_{T_0})}^2 + \|v^n\|_{W^{2,1}_p(Q_{T_0})}^2 +\|\tilde{b}\|_{W^{1,0}_p(Q_{T_0})}^2 + \|\bar{b}\|^2_{L^p(Q_{T_0})} ] \nonumber \\
&\phantom{\int_{t}^{t+t_1} F_{\delta}(s) \, ds \leq \tilde{K}_0 \, \{} + t_1 \, [\|\varrho^n_x\|_{L^{p,\infty}(Q_{T_0})}^2 + \, \delta^{-1}]\}\nonumber\\
& \phantom{\int_{t}^{t+t_1} F_{\delta}(s) \, ds } \leq C_0 \, (1+\delta^{-1}) \, t_1^{1-\frac{2}{p}} \, .
\end{align}
Here we use the uniform bounds \eqref{uniform}.
Similarly, using that $F^{(1)}_{\delta}(s) \leq K_0 \, [\|q^n_t\|_{L^{3}(\Omega)}^2 + \|\varrho^{n+1}_t\|_{L^{3}(\Omega)}^2 + \delta^{-1}]$, we show that
\begin{align}\label{energy8}
\int_{t}^{t+t_1} F^{(1)}_{\delta}(s) \, ds & \leq \tilde{K}_0 \,\{ t_1^{1-\frac{2}{p}} \, \|q^n_t\|_{L^p(Q_{T_0})}^2 + t_1 \, [\|\varrho^{n+1}_t\|_{L^{p,\infty}(Q_{T_0})}^2+ \delta^{-1}]\}\nonumber\\
& \leq C_1 \, (1+\delta^{-1}) \, t_1^{1-\frac{2}{p}} \, .
\end{align}
We integrate \eqref{energy6} over $[t, \, \tau]$ for $t_1 \leq T_0$, $t \leq T_0 - t_1$ and $t \leq \tau \leq t+t_1$ arbitrary. Note that
\begin{align*}
& \int_{\Omega} \{a^n(r^{n+1}, \, r^{n+1}) +\tfrac{1}{2} \, |\sigma^{n+1}|^2 + \tfrac{1}{2} \, \varrho_n \, |w^{n+1}|^2\}(\tau) \, dx\\
& \quad \geq \frac{1}{2} \, \int_{\Omega} \{\lambda_{\inf}(R_q^n) \, |r^{n+1}|^2 + |\sigma^{n+1}|^2 + \inf_{Q_{T_0}} \varrho_n \, |w^{n+1}|^2\}(\tau) \, dx\\
& \quad \geq \frac{1}{2} \, \min\{1, \, \lambda_{\inf}(R_q^n), \, \inf_{Q_{T_0}} \varrho_n\} \, (\|e^{n+1}(\tau)\|_{L^2}^2 + \|\sigma^{n+1}(\tau)\|_{L^2}^2) \, .
\end{align*}
Invoking \eqref{uniform}, there is a uniform $k_0>0$ such that $\frac{1}{2} \, \min\{1, \, \lambda_{\inf}(R_q^n), \, \inf_{Q_{T_0}} \varrho_n\} \geq k_0 > 0$. We also define $p_0 := \min\{\lambda_0, \, \nu_0\}$. This shows the inequality
\begin{align}\label{energy9}
& k_0 \, ( \|e^{n+1}(\tau)\|_{L^2}^2 + \|\sigma^{n+1}(\tau)\|_{L^2}^2) + \frac{p_0}{2} \int_{Q_{t,\tau}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2) \nonumber\\
& \leq \delta \, \int_{Q_{t,\tau}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2)\nonumber\\
& + \int_{t}^{\tau} \, F_{\delta}(s) \, (\|e^n(s)\|_{L^2}^2 +\|\sigma^{n+1}(s)\|_{L^2}^2) \, ds + \int_{t}^{\tau} \, F^{(1)}_{\delta}(s) \, \|e^{n+1}(s)\|_{L^2}^2 \, ds \, .
\end{align}
Thus, taking the supremum over all $\tau \in [t, \, t+t_1]$ yields
\begin{align*}
& k_0 \, \sup_{t \leq \tau \leq t + t_1} ( \|e^{n+1}(\tau)\|_{L^2}^2+ \|\sigma^{n+1}(\tau)\|_{L^2}^2) \leq \delta \, \int_{Q_{t,t+t_1}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2) \\
& + \int_{t}^{t+t_1} \, F_{\delta}(s) \, ds \, \sup_{t \leq \tau \leq t_1} (\|e^n(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2) + \int_{t}^{t+t_1} \, F^{(1)}_{\delta}(s) \, ds \, \sup_{t \leq \tau \leq t+t_1} \|e^{n+1}(\tau)\|_{L^2}^2 \, .
\end{align*}
On the other hand, choosing $\tau = t+t_1$ in \eqref{energy9} shows that $\frac{p_0}{2} \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2)$ is also bounded above by the same right-hand side. Thus
\begin{align*}
& k_0 \, \sup_{t \leq \tau \leq t + t_1} (\|e^{n+1}(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2 )+ \frac{p_0}{2}\, \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2) \\
& \leq 2 \, \delta \, \int_{Q_{t,t+t_1}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2) + 2 \, \int_{t}^{t+t_1} \, F^{(1)}_{\delta}(s) \, ds \, \sup_{t \leq \tau \leq t+ t_1} \|e^{n+1}(\tau)\|_{L^2}^2 \\
& \qquad + 2 \, \int_{t}^{t+t_1} \, F_{\delta}(s) \, ds \, \sup_{t \leq \tau \leq t+ t_1} (\|e^n(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2 ) \, .
\end{align*}
We choose $\delta_0 = \frac{p_0}{8}$ and $0 < t_1 < T_0-t$ such that $ 2 \, \int_{t}^{t+t_1} \, F^{(1)}_{\delta_0}(s) \, ds \leq \frac{k_0}{2}$. In view of \eqref{energy8}, it is sufficient to satisfy the condition $C_1 \, \left(1+ \frac{8}{p_0}\right) \, t_1^{1-\frac{2}{p}} \leq \frac{k_0}{4}$. Then
\begin{align*}
& \frac{k_0}{2} \, \sup_{t \leq \tau \leq t + t_1} ( \|e^{n+1}(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2 ) + \frac{p_0}{2} \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2) \\
& \leq \frac{p_0}{4} \, \int_{Q_{t,t+t_1}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2) + 2 \, \int_{t}^{t+t_1} \, F_{\delta_0}(s) \, ds \, \sup_{t \leq \tau \leq t + t_1} ( \|e^n(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2 )\, .
\end{align*}
By requiring that $C_0 \, \left(1+ \frac{8}{p_0}\right) \, t_1^{1-\frac{2}{p}} \leq \frac{k_0}{8}$, we choose $t_1$ such that $ \int_{t}^{t+t_1} \, F_{\delta_0}(s) \, ds \leq \frac{k_0}{8}$ (use \eqref{energy7}). It follows that
\begin{align*}
& \frac{k_0}{4} \, \sup_{t \leq \tau \leq t + t_1} ( \|e^{n+1}(\tau)\|_{L^2}^2 +\|\sigma^{n+1}(\tau)\|_{L^2}^2 ) + \frac{p_0}{2} \, \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2)\\
& \leq \frac{k_0}{4} \, \sup_{t \leq \tau \leq t+ t_1} \|e^{n}(\tau)\|_{L^2}^2 + \frac{p_0}{4} \, \int_{Q_{t,t+t_1}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2)\, .
\end{align*}
The claim follows.
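To make the concluding step explicit (a sketch of one way to read off the contraction property from the last inequality), abbreviate
\begin{align*}
E_n := \frac{k_0}{4} \, \sup_{t \leq \tau \leq t + t_1} \|e^{n}(\tau)\|_{L^2}^2 + \frac{p_0}{4} \, \int_{Q_{t,t+t_1}} (|\nabla r^{n}|^2 + |\nabla w^{n}|^2) \, .
\end{align*}
The last inequality then implies
\begin{align*}
E_{n+1} + \frac{p_0}{4} \, \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2) \leq E_n \, ,
\end{align*}
so that $(E_n)$ is non-increasing and, telescoping over $n$, the series $\sum_{n} \int_{Q_{t,t+t_1}} (|\nabla r^{n+1}|^2 + |\nabla w^{n+1}|^2)$ converges.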
\end{proof}
In order to complete the proof of Theorems \ref{MAIN} and \ref{MAIN2}, it remains to investigate the characterisation of the maximal existence time $T^*$.
\begin{lemma}\label{MAXEX}
Suppose that $u = (q, \, \varrho, \, v) \in \mathcal{X}_{t}$ is a solution to $\widetilde{\mathscr{A}}(u) = 0$ and $u(0) = u_0$ for all $t < T^*$. If for some $\alpha > 0$ the quantity $\mathcal{N}(t) := \|q\|_{C^{\alpha,\frac{\alpha}{2}}(Q_{t})} + \|\nabla q\|_{L^{\infty,p}(Q_{t})} + \|v\|_{L^{z \, p,p}(Q_{t})} + \int_{0}^{t} [\nabla v(s)]_{C^{\alpha}(\Omega)} \, ds$ remains finite as $t \nearrow T^*$, then it is possible to extend the solution to a larger time interval.
\end{lemma}
\begin{proof}
To show this claim we first note that the components of $v_x$ all have spatial mean-value zero over $\Omega$ due to the boundary condition \eqref{lateralv}. Thus, the inequalities $\|v_x(s)\|_{L^{\infty}(\Omega)} \leq c_{\Omega} \, [v_x(s)]_{C^{\alpha}(\Omega)}$ and $\|v_x\|_{L^{\infty,1}(Q_t)} \leq c_{\Omega} \, \int_0^t [ v_x(s)]_{C^{\alpha}(\Omega)} \, ds$ are valid. Invoking Proposition \ref{solonnikov2}, we thus see that $(m(t))^{-1}, \, M(t)$ and $\sup_{s \leq t} [\varrho(s)]_{C^{\alpha}(\Omega)}$ are all bounded by a function of $\int_0^t [ v_x(s)]_{C^{\alpha}(\Omega)} \, ds$, thus also by a function of $\mathcal{N}(t)$.
Invoking further the estimates of Proposition \ref{solonnikov2}, we also see that
\begin{align*}
\|\varrho_x(s)\|_{L^p(\Omega)} \leq & \phi(R_0, \, \|v_x\|_{L^{\infty,1}(Q_s)}) \, (1+ \int_{0}^s\|v_{x,x}(\tau)\|_{L^{p}(\Omega)} \, d\tau) \\
\leq & \phi(R_0, \, \mathcal{N}(s)) \, (1+ \mathscr{V}(s; \, v)) \, ,
\end{align*}
for all $s \geq 0$, with a function $\phi$ increasing in its arguments. Next we apply Corollary \ref{normvmain}. Due to the fact that $(m(t))^{-1}, \, M(t)$ and $\sup_{s \leq t} [\varrho(s)]_{C^{\alpha}(\Omega)}$ are bounded by a function of $\mathcal{N}(t)$, this yields $\mathscr{V}(t; \, v) \leq \phi(t, \, \mathcal{N}(t)) \, (\|f\|_{L^p(Q_t)} + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)})$. We recall the form \eqref{A3right} of the function $f$, and estimate
\begin{align*}
|f(x, \, t)| \leq & |\nabla \varrho| \, \sup_{Q_t} |R_{\varrho}(\varrho, \, q)| +|\nabla q| \, \sup_{Q_t} |R_{q}(\varrho, \, q)| \\
& + c \, (|v(x, \, t)| \, |v_x(x, \, t)| + |\bar{b}(x, \, t)| + |\tilde{b}(x, \, t)|) \, \sup_{Q_t} \varrho \, .
\end{align*}
We can bound the coefficients via $\sup_{Q_t} |R_{\varrho}(\varrho, \, q)| \leq \phi(M(t), \, \|q\|_{L^{\infty}(Q_t)}) \leq \phi(\mathcal{N}(t))$, etc.
Therefore, we can show that
\begin{align*}
\|f\|_{L^p(Q_t)}^p \leq \phi(\mathcal{N}(t)) \, (\|\nabla \varrho\|_{L^p(Q_t)}^p + \|\nabla q\|_{L^p(Q_t)}^p + \|v \, \nabla v\|_{L^p(Q_t)}^p + \|\tilde{b}\|_{L^p(Q_t)}^p + \|\bar{b}\|_{L^p(Q_t)}^p) \, .
\end{align*}
Using the abbreviation $A_0(t) := \|\tilde{b}\|_{L^p(Q_t)}^p + \|\bar{b}\|_{L^p(Q_t)}^p + \|v^0\|_{W^{2-\frac{2}{p}}_p(\Omega)}$, we obtain, after straightforward computations,
\begin{align*}
\mathscr{V}^p(t; \, v) \leq \phi(t, \, \mathcal{N}(t)) \, ( & \|\nabla \varrho\|_{L^p(Q_t)}^p + \|v \, \nabla v\|_{L^p(Q_t)}^p + \|\nabla q\|_{L^p(Q_t)}^p + A_0(t)) \, .
\end{align*}
As shown, we have $\|\nabla \varrho\|_{L^p(Q_t)}^p \leq \phi(R_0, \, \|v_x\|_{L^{\infty,1}(Q_s)}) \, \int_0^t (1+ \mathscr{V}(s; \, v))^p \, ds$. Recall the continuous embedding $W^{2-\frac{2}{p}}_p \subset L^{\frac{3p}{(5-p)^+}}$ (cf. Rem. \ref{parabolicspace}). Choosing $z = \frac{3}{p-2}$ if $3 < p < 5$, $z > 1$ arbitrary if $p = 5$ and $z = 1$ if $p > 5$, we are thus able to show by means of H\"older's inequality that $\|v \, v_x\|_{L^p(Q_t)}^p \leq \int_0^t \|v(s)\|^p_{L^{z\, p}} \, \mathscr{V}^p(s; \, v) \, ds$.
Invoking the Gronwall Lemma yields $\mathscr{V}^p(t; \, v) \leq \phi(t, \, R_0, \,\mathcal{N}(t)) \, (\|\nabla q\|_{L^p(Q_t)}^p + A_0(t))$. Since $\|\nabla q\|_{L^p(Q_t)}$ is also controlled by $t$ and $\mathcal{N}(t)$, we obtain that $\mathscr{V}^p(t; \, v) \leq \phi(t, \, R_0, \,\mathcal{N}(t))$. We now also have $\|\nabla \varrho\|_{L^{p,\infty}(Q_t)}^p \leq \phi(t, \, R_0, \,\mathcal{N}(t))$. Corollary \ref{normrhomain} yields that $\|\varrho\|_{C^{\beta, \frac{\beta}{2}}(Q_t)} \leq \phi(t, \, R_0, \,\mathcal{N}(t))$ for $\beta = 1-\frac{3}{p}$.
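For clarity, the Gronwall step just invoked can be sketched as follows; constants are absorbed into $\phi$, which may change from line to line. With $y(t) := \mathscr{V}^p(t; \, v)$, the preceding two estimates combine to
\begin{align*}
y(t) \leq B(t) + \int_0^t k(s) \, y(s) \, ds \, , \qquad k(s) := \phi(t, \, R_0, \, \mathcal{N}(t)) \, (1 + \|v(s)\|^p_{L^{z p}}) \, ,
\end{align*}
where $B(t) := \phi(t, \, R_0, \, \mathcal{N}(t)) \, (\|\nabla q\|_{L^p(Q_t)}^p + A_0(t) + t)$ and we used $(1+\mathscr{V}(s; \, v))^p \leq 2^{p-1} \, (1 + y(s))$. Since $\|v\|_{L^{zp,p}(Q_t)} \leq \mathcal{N}(t)$, the kernel $k$ is integrable over $(0, \, t)$, and the integral form of the Gronwall lemma yields $y(t) \leq B(t) \, \exp(\|k\|_{L^1(0,t)})$.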
To show the final claim, we reconsider the inequality \eqref{ControlProducts0} in the proof of Corollary \ref{A1linUMFORMsecond}. Note that a solution $(q, \,\varrho, \, v)$ is a fixed-point of $\mathcal{T}$, so that this inequality is valid with $q^* = q$, $\varrho^* = \varrho$ and $v^* =v$. The factors $\phi^*_{1,t}, \, \phi^*_{2,t}$ are increasing functions of $(m(t))^{-1}$, $M(t)$, $\|q\|_{L^{\infty}(Q_t)}$ and $[q]_{C^{\beta,\frac{\beta}{2}}(Q_t)}$, $[\varrho]_{C^{\beta,\frac{\beta}{2}}(Q_t)}$.
With the preliminary considerations in this proof, we thus can state that
\begin{align*}
\mathscr{V}^p(t, \, q) \leq \phi(t, \, \mathcal{N}(t)) \, (& \|q^0\|^p_{W^{2-\frac{2}{p}}_p(\Omega)} + \|q\|^p_{W^{1,0}_p(Q_t)} + \|g\|_{L^p(Q_t)}^p \nonumber\\
& + \|\nabla \varrho \cdot \nabla q\|^p_{L^p(Q_t)} + \|\nabla q \cdot \nabla q\|^p_{L^p(Q_t)}) \, .
\end{align*}
Invoking \eqref{A1right} and the fact that $\|v\|_{W^{2,1}_p(Q_t)} + \|\nabla \varrho\|_{L^{p,\infty}(Q_t)}^p$ and, by definition $\|\nabla q\|_{L^p(Q_t)}$ are all bounded by $\mathcal{N}(t)$, we can obtain the inequality
\begin{align*}
\mathscr{V}^p(t; \, q) \leq \phi(t, \, D_0, \, \mathcal{N}(t)) \, \Big(1 + \int_{0}^t \|\nabla q(s)\|^p_{L^{\infty}(\Omega)} \, (\|\nabla \varrho(s)\|_{L^p}^p +\|\nabla q(s)\|_{L^p}^p ) \, ds \Big) \, .
\end{align*}
Thus, if $\|\nabla q(s)\|^p_{L^{\infty}(\Omega)}$ is integrable in time, we obtain by means of the Gronwall lemma an estimate depending only on $t$ and $\mathcal{N}(t)$. The claim follows.
\end{proof}
\section{Estimates for the solutions to the second linearisation}\label{contiT1}
We now consider the equations \eqref{linearT1second}, \eqref{linearT2second}, \eqref{linearT3second} underlying the definition of the map $\mathcal{T}^1$. Here the data is a pair $(r^*, \, w^*) \in \phantom{}_0\mathcal{Y}_T$, and we want to find the image $(r, \, w)$ in the same space, as well as $\sigma \in \phantom{}_0W^{1,1}_{p,\infty}(Q_T)$, by solving these equations. The solvability will not be discussed, since it can easily be obtained by linear continuation using the estimates. We shall therefore pass directly to the estimates.
The first point consists in obtaining estimates for solutions to a perturbed continuity equation. For precisely this point, we need to assume more regularity of the function $\hat{\varrho}^0$.
\begin{lemma}\label{PerturbConti}
Assume that $\hat{\varrho}^0 \in W^{2,0}_p(Q_T)$, that $v, \, w \in W^{2,1}_p(Q_T; \, \mathbb{R}^3)$ and that $\sigma \in W^{1,1}_{p,\infty}(Q_T)$ solves $\partial_t \sigma +\divv(\sigma \, v + \hat{\varrho}^0 \, w) = 0$ in $Q_T$ with $\sigma(x, \, 0) = 0$ in $\Omega$. Then there are constants $c, \, C > 0$ depending only on $\Omega$, such that for all $s \leq T$ we have
\begin{align*}
\|\sigma(s)\|_{W^{1,p}(\Omega)}^p \leq & C \, \exp(c \, \int_0^s [\|v_x\|_{L^{\infty}(\Omega)} + \|v_{x,x}\|_{L^p(\Omega)} + 1] \, d\tau) \, \times \\
& \times (\|\hat{\varrho}^0\|_{W^{2,0}_p(Q_s)}^p \, \|w\|_{L^{\infty}(Q_s)}^p + \|\hat{\varrho}^0\|_{W^{1,1}_{p,\infty}(Q_s)}^p \, \|w\|_{W^{2,0}_p(Q_s)}^p) \, .
\end{align*}
\end{lemma}
\begin{proof}
After some obvious technical steps, we can show that the components $z_i := \sigma_{x_i}$ ($i=1,2,3$) of the gradient of $\sigma$ satisfy, in the sense of distributions,
\begin{align*}
\partial_t z_i + \divv(z_i \, v) = -\divv(\sigma \, v_{x_i}) - \divv(\hat{\varrho}^0_{x_i} \, w + \hat{\varrho}^0 \, w_{x_i}) =: -\divv(\sigma \, v_{x_i}) + R_i \, .
\end{align*}
The right-hand side is bounded in $L^p(Q_t)$, and the velocity $v$ belongs to $W^{2,1}_p(Q_t)$. Thus, $z_i$ is also a renormalised solution to the latter equation. Without entering the details of this notion, the following identity is valid in the sense of distributions:
\begin{align*}
\partial_t f(z) + \divv(f(z) \, v) + (z \cdot f_z(z) - f(z)) \, \divv v = \sum_{i=1}^3 f_{z_i}(z) \, (-\divv(\sigma \, v_{x_i}) + R_i)
\end{align*}
for every globally Lipschitz continuous function $f \in C^1(\mathbb{R}^3)$. We integrate the latter identity over $Q_t$. Recall that $\sigma(x, \, 0) = 0$ in $\Omega$ by assumption. If $f(0) = 0$, we then obtain that
\begin{align*}
\int_{\Omega} f(z(t)) \, dx + \int_{Q_t} (z \cdot f_z(z) - f(z)) \, \divv v \, dxds = \int_{Q_t} f_{z}(z) \cdot (-\divv(\sigma \, v_{x}) + R) \, dxds \, .
\end{align*}
By means of a standard procedure, we approximate the function $f(z) = |z|^p$ by means of a sequence of smooth Lipschitz continuous functions $\{f_m\}$. This yields
\begin{align*}
\int_{\Omega} |z(t)|^p \, dx + (p-1) \, \int_{Q_t} |z|^p \, \divv v \, dxds = p \, \int_{Q_t} |z|^{p-2} \, z \cdot (-\divv(\sigma \, v_{x}) + R) \, dxds \, .
\end{align*}
The estimates below will establish that all members in the latter identity are finite. We first use H\"older's inequality and note that
\begin{align*}
\left| \int_{Q_t} |z|^{p-2} \, z \cdot \divv(\sigma \, v_{x}) \, dxds \right| \leq & \int_{Q_t} |z|^{p-1} \, (|z| \, |v_x| + |\sigma| \, |v_{x,x}|) \, dxds\\
\leq & \int_{Q_t} |z|^{p} \, |v_x| \, dxds + \int_{0}^t \|v_{x,x}\|_{L^p(\Omega)} \, \|z\|_{L^p(\Omega)}^{p-1} \, \|\sigma\|_{L^{\infty}(\Omega)} \, ds \, .
\end{align*}
Next, we recall that for a solution to $ \partial_t \sigma + \divv(\sigma \, v + \hat{\varrho}^0 \, w) = 0$, the integral $\int_{\Omega} \sigma(t, \, x) \, dx$ is conserved and equal to zero. Due to the Poincar\'e inequality, we therefore have $\|\sigma(t)\|_{L^p(\Omega)} \leq c_0 \, \|z(t)\|_{L^p(\Omega)}$ and, by the Sobolev embedding, also that $\|\sigma(t)\|_{L^{\infty}(\Omega)} \leq \tilde{c}_0 \, \|z(t)\|_{L^p(\Omega)}$. Thus,
\begin{align*}
\left| \int_{Q_t} |z|^{p-2} \, z \cdot \divv(\sigma \, v_{x}) \, dxds \right| \leq \int_0^t (\|v_x\|_{L^{\infty}(\Omega)} + \tilde{c}_0 \, \|v_{x,x}\|_{L^p(\Omega)}) \, \|z\|_{L^p(\Omega)}^{p} \, ds \, .
\end{align*}
Moreover, by Young's inequality,
\begin{align*}
& \int_{Q_t} |z|^{p-2} \, z \cdot R \, dxds \leq \int_{0}^t \|z\|_{L^p(\Omega)}^{p} \, ds + c_p \, \int_{0}^t \|R\|_{L^p(\Omega)}^p \, ds \leq \int_{0}^t \|z\|_{L^p(\Omega)}^{p} \, ds \\
& \qquad + c_p \, \int_{0}^t [\|w\|_{L^{\infty}(\Omega)}^p \, \|\hat{\varrho}^0_{x,x}\|_{L^p(\Omega)}^p + 2 \, \|\hat{\varrho}^0_{x} \, w_x\|^p_{L^p(\Omega)} + \|\hat{\varrho}^0\|_{L^{\infty}(\Omega)}^p \, \|w_{x,x}\|_{L^p(\Omega)}^p] \, ds \, .
\end{align*}
Further,
\begin{align*}
\int_{0}^t \|w\|_{L^{\infty}(\Omega)}^p \, \|\hat{\varrho}^0_{x,x}\|_{L^p(\Omega)}^p \, ds &\leq \|w\|_{L^{\infty}(Q_t)}^p \, \|\hat{\varrho}^0\|_{W^{2,0}_p(Q_t)}^p\, ,\\
\int_{0}^t \|\hat{\varrho}^0_{x} \, w_x\|^p_{L^p(\Omega)} \, ds & \leq \|w_x\|_{L^{\infty,p}(Q_t)}^p \, \|\hat{\varrho}^0_x\|_{L^{p,\infty}(Q_t)}^p
\leq C \, \|\hat{\varrho}^0_x\|_{L^{p,\infty}(Q_t)}^p \, \|w\|_{W^{2,0}_p(Q_t)}^p \, ,\\
\int_{0}^t \|\hat{\varrho}^0\|_{L^{\infty}(\Omega)}^p \, \|w_{x,x}\|_{L^p(\Omega)}^p \, ds & \leq \|\hat{\varrho}^0\|_{L^{\infty}(Q_t)}^p \, \|w\|_{W^{2,0}_p(Q_t)}^p \, .
\end{align*}
Thus,
\begin{align*}
\int_{\Omega} |z(t)|^p \, dx \leq & (p-1) \, \int_0^t [\|v_x\|_{L^{\infty}(\Omega)} + \tilde{c}_0 \, \|v_{x,x}\|_{L^p(\Omega)} + p^{\prime}] \, \|z(s)\|_{L^p(\Omega)}^{p} \, ds\\
& + p \, c_p \, [\|\hat{\varrho}^0\|_{W^{2,0}_p(Q_t)}^p \, \|w\|_{L^{\infty}(Q_t)}^p + \|\hat{\varrho}^0\|_{W^{1,1}_{p,\infty}(Q_t)}^p \, \|w\|_{W^{2,0}_p(Q_t)}^p] \, .
\end{align*}
The claim follows by means of the Gronwall Lemma.
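In detail, setting $y(t) := \int_{\Omega} |z(t)|^p \, dx$, the last inequality has the structure
\begin{align*}
y(t) \leq B + \int_0^t a(s) \, y(s) \, ds \, , \qquad a(s) := (p-1) \, [\|v_x(s)\|_{L^{\infty}(\Omega)} + \tilde{c}_0 \, \|v_{x,x}(s)\|_{L^p(\Omega)} + p^{\prime}] \, ,
\end{align*}
with $B := p \, c_p \, [\|\hat{\varrho}^0\|_{W^{2,0}_p(Q_t)}^p \, \|w\|_{L^{\infty}(Q_t)}^p + \|\hat{\varrho}^0\|_{W^{1,1}_{p,\infty}(Q_t)}^p \, \|w\|_{W^{2,0}_p(Q_t)}^p]$. The integral form of the Gronwall lemma then gives $y(t) \leq B \, \exp(\int_0^t a(s) \, ds)$, and the asserted estimate for $\|\sigma(t)\|_{W^{1,p}(\Omega)}^p$ follows after adding the Poincar\'e bound $\|\sigma(t)\|_{L^p(\Omega)}^p \leq c_0^p \, y(t)$.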
\end{proof}
Next we need an estimate for the operators $(g^1)^{\prime}$ and $(f^1)^{\prime}$ from the right-hand side of \eqref{equationdiff1}, \eqref{equationdiff3}.
\begin{lemma}\label{RightHandControlsecond}
Let $\hat{u}_0 := (\hat{q}^0, \, \hat{\varrho}^0, \, \hat{v}^0) \in \mathcal{X}_{T,+}$ with $\hat{\varrho}^0 \in W^{2,0}_p(Q_T)$. Let $(r^*, \, w^*) \in \phantom{}_0\mathcal{Y}_T$, and $u^* := (\hat{q}^0 + r^*, \mathscr{C}(\hat{v}^0+w^*), \, \hat{v}^0+w^*) \in \mathcal{X}_{T,+}$ (cf. \eqref{ustar}). Let $(r, \, w) \in \phantom{}_0\mathcal{Y}_T$, and denote by $\sigma$ the function obtained via solution of \eqref{linearT2second} with $v^* = \hat{v}^0+w^*$. We define $\bar{u} := (r, \, \sigma, \, w) \in \phantom{}_0\mathcal{X}_T$. Then the operators $(g^1)^{\prime}$ and $(f^1)^{\prime}$ in the right-hand side of \eqref{equationdiff1}, \eqref{equationdiff3} satisfy
\begin{align*}
& \|(g^1)^{\prime}(u^*,\, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p + \|(f^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p \leq K_2^*(t) \, \int_{0}^t \mathscr{V}^p(s) \, K^*_1(s) \, ds \, ,
\end{align*}
with functions $K^*_1 \in L^1(0,T)$ and $K_2^* \in L^{\infty}(0,T)$. There is a function $\Phi^* = \Phi^*(t, \, a_1,\ldots,a_5)$ defined for all $t, \, a_1, \ldots, a_5 \geq 0$, continuous and increasing in all arguments, such that for all $t \leq T$
\begin{align*}
\|K^*_1\|_{L^1(0,t)} + \|K^*_2\|_{L^{\infty}(0,t)} \leq \Phi^*(t, \, \mathscr{V}^*(t),\, \|\hat{u}^0\|_{\mathcal{X}_t}, \, \|\hat{\varrho}^0\|_{W^{2,0}_p(Q_t)}, \, \|\tilde{b}\|_{W^{1,0}_p(Q_t)}, \, \|\bar{b}\|_{L^p(Q_t)}) \, .
\end{align*}
Here we used the abbreviations $\mathscr{V}(t) := \mathscr{V}(t; \, r) + \mathscr{V}(t; \, w)$ and $\mathscr{V}^*(t) := \mathscr{V}(t; \, r^*) + \mathscr{V}(t; \, w^*)$.
\end{lemma}
\begin{proof}
At first we estimate $(g^1)^{\prime}$. Starting from \eqref{Arightlinear2}, we obtain by elementary means that
\begin{align*}
|(g^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}| \leq & |g^1_q(u^*, \, \hat{u}^0)| \, |r| + |g^1_{\varrho}(u^*, \, \hat{u}^0)| \, |\sigma| + |g^1_{v}(u^*, \, \hat{u}^0)| \,|w|\\
& + |g^1_{q_x}(u^*, \, \hat{u}^0)| \, |r_x| + |g^1_{\varrho_x}(u^*, \, \hat{u}^0)| \, |\sigma_x| + |g^1_{v_x}(u^*, \, \hat{u}^0)| \,|w_x| \, .
\end{align*}
We define $z = \frac{3p}{3-(5-p)^+}$, and by means of H\"older's inequality we obtain first that
\begin{align*}
\|(g^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p \leq & \int_{0}^t \{\|g^1_q\|_{L^p(\Omega)}^p \, \|r\|_{L^{\infty}(\Omega)}^p + \|g^1_{q_x}\|_{L^z(\Omega)}^p \, \|r_x\|_{L^{\frac{3p}{(5-p)^+}}(\Omega)}^p\} \, ds \\
& +\int_{0}^t \{\|g^1_v\|_{L^p(\Omega)}^p \, \|w\|_{L^{\infty}(\Omega)}^p + \|g^1_{v_x}\|_{L^z(\Omega)}^p \, \|w_x\|_{L^{\frac{3p}{(5-p)^+}}(\Omega)}^p\} \, ds\\
& + \int_{0}^t \{\|g^1_{\varrho}\|_{L^p(\Omega)}^p \, \|\sigma\|_{L^{\infty}(\Omega)}^p + \|g^1_{\varrho_x}\|_{L^{\infty}(\Omega)}^p \, \|\sigma_x\|_{L^p(\Omega)}^p\} \, ds \, .
\end{align*}
Making use of the embeddings $W^{2-\frac{2}{p}}_p \subset L^{\frac{3p}{(5-p)^+}}$ and $W^{1,p} \subset L^{\infty}(\Omega)$ (recall also that the mean of $\sigma$ over $\Omega$ is zero at every time!), we show that
\begin{align*}
\|(g^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p \leq & \int_{0}^t \sup_{\tau \leq s} \{\|r(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p + \|w(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p \} \, K_1(s) \, ds \\
& + \int_{0}^t K_2(s)\, \|\sigma_x(s)\|_{L^p(\Omega)}^p \, ds \, , \\
K_1(s) := & \|g^1_q(s)\|_{L^p(\Omega)}^p + C \, \|g^1_{q_x}(s)\|_{L^z(\Omega)}^p + \|g^1_v(s)\|_{L^p(\Omega)}^p +C \, \|g^1_{v_x}(s)\|_{L^z(\Omega)}^p \, ,\\
K_2(s) := & C\, \|g^1_{\varrho}(s)\|_{L^p(\Omega)}^p + \|g^1_{\varrho_x}(s)\|_{L^{\infty}(\Omega)}^p \, .
\end{align*}
We invoke Lemmas \ref{gedifferential} and \ref{gedifferentialun} to see that $K_1$ and $K_2$ are integrable functions whose norms are controlled by the data. Recall also that the minimum and the maximum of the function $\varrho^* := \mathscr{C}(\hat{v}^0+w^*)$, which enter the estimates via the coefficients, are controlled by a function of $\mathscr{V}(t; \, \hat{v}^0+w^*)$.
For the terms containing $\sigma_x$, we use the result of Lemma \ref{PerturbConti}. It yields for $s \leq t$ in particular that
\begin{align*}
\|\sigma(s)\|^p_{W^{1,p}(\Omega)} \leq & K_3(s) \, \|w\|_{L^{\infty}(Q_s)}^p + K_4(s) \, \|w\|_{W^{2,0}_p(Q_s)}^p \, ,\\
K_3(s) := & C \, \exp(c \, \int_0^s (\|v^*_x\|_{L^{\infty}(\Omega)} + \|v^*_{x,x}\|_{L^p(\Omega)} + 1) \, d\tau) \, \|\hat{\varrho}^0\|_{W^{2,0}_p(Q_s)}^p \, ,\\
K_4(s) := & C \, \exp(c \, \int_0^s (\|v^*_x\|_{L^{\infty}(\Omega)} + \|v^*_{x,x}\|_{L^p(\Omega)} + 1) \, d\tau) \, \|\hat{\varrho}^0\|_{W^{1,1}_{p,\infty}(Q_s)}^p \, .
\end{align*}
We obtain that
\begin{align*}
\int_{0}^t K_2(s) \, \|\sigma_x(s)\|_{L^p(\Omega)}^p \, ds \leq \max\{K_3(t), \,K_4(t)\} \, \int_{0}^t K_2(s) \, [\|w\|_{L^{\infty}(Q_s)}^p + \|w\|_{W^{2,0}_p(Q_s)}^p] \, ds \, .
\end{align*}
Overall, since $\|w\|_{L^{\infty}(Q_s)} \leq c \, \sup_{\tau \leq s} \|w(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}$, we obtain that
\begin{align*}
\|(g^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p \leq & \int_{0}^t \sup_{\tau \leq s} \{\|r(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p + \|w(\tau)\|_{W^{2-\frac{2}{p}}_p(\Omega)}^p \} \, K_1(s) \, ds\\
& + \max\{K_3(t), \,K_4(t)\} \, \int_{0}^t K_2(s) \, [\|w\|_{L^{\infty}(Q_s)}^p + \|w\|_{W^{2,0}_p(Q_s)}^p] \, ds\\
\leq & c \, \max\{1, \, K_3(t), \,K_4(t)\} \, \int_{0}^t \mathscr{V}^p(s; \, w) \, (K_1(s)+K_2(s)) \, ds \, .
\end{align*}
We can prove a similar result for $(f^1)^{\prime}$. This finishes the proof of the estimate.
\end{proof}
\section{Existence of a unique fixed-point of $\mathcal{T}^1$}\label{FixedPointT1}
We are now in the position to prove the continuity estimate for $\mathcal{T}^1$. We assume that $(r, \, \sigma, \, w)$ satisfy the equations \eqref{linearT1second}, \eqref{linearT2second}, \eqref{linearT3second} with data $(r^*, \, w^*)$. We apply Proposition \ref{A1linmain} to \eqref{linearT1second} and, making use of the fact that $r(0, \, x) =0$ in $\Omega$, we get an estimate
\begin{align}\label{difference4}
\mathscr{V}(t; \, r) \leq & C \, \Psi_{1,t} \, \|g^1\|_{L^p(Q_t)} \leq C \, \Psi_{1,t} \, (\|\hat{g}^0\|_{L^p(Q_t)} + \|(g^1)^{\prime}(u^*,\hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}) \, .
\end{align}
Here $\Psi_{1,t} = \Psi_1(t, \, (m^*(t))^{-1}, \, M^*(t), \, \|q^0\|_{ W^{2-\frac{2}{p}}_p(\Omega)}, \,
\mathscr{V}(t; \, q^*), \, [\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}, \, \|\nabla \varrho^*\|_{L^{p,\infty}(Q_t)})$, and $\bar{u} := (r, \, \sigma, \, w)$. We then apply Proposition \ref{normvmain} to \eqref{linearT3second} and obtain that
\begin{align}\label{difference5}
\mathscr{V}(t; \, w) \leq & C \, \tilde{\Psi}_{2,t} \, \|f^1\|_{L^p(Q_t)} \leq C \, \tilde{\Psi}_{2,t} \, (\|\hat{f}^0\|_{L^p(Q_t)} + \|(f^1)^{\prime}(u^*,\hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}) \, .
\end{align}
Here $\tilde{\Psi}_{2,t} = \Psi_2(t, \, (m^*(t))^{-1}, \, M^*(t), \, \sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)}) \, (1+ \sup_{s\leq t} [\varrho^*(s)]_{C^{\alpha}(\Omega)})^{\frac{2}{\alpha}}$.
We next raise both \eqref{difference4} and \eqref{difference5} to the $p$-th power, add the two inequalities, and get for the function $\mathscr{V}(t) := \mathscr{V}(t; \, r) + \mathscr{V}(t; \, w)$ an inequality
\begin{align*}
\mathscr{V}^p(t) \leq C \, (\Psi_{1,t}^p + \tilde{\Psi}_{2,t}^p) \, (& \|\hat{g}^0\|^p_{L^p(Q_t)} + \|\hat{f}^0\|^p_{L^p(Q_t)}\\
& + \|(g^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p + \|(f^1)^{\prime}(u^*, \, \hat{u}^0) \, \bar{u}\|_{L^p(Q_t)}^p) \, .
\end{align*}
Then we apply Lemma \ref{RightHandControlsecond} and find
\begin{align*}
\mathscr{V}^p(t) \leq & C \, (\Psi_{1,t}^p + \tilde{\Psi}_{2,t}^p) \, (\|\hat{g}^0\|^p_{L^p(Q_t)} + \|\hat{f}^0\|^p_{L^p(Q_t)} + K_2^*(t) \, \int_0^t K^*_1(s) \, \mathscr{V}^p(s) \, ds) \, .
\end{align*}
Gronwall's lemma implies that
\begin{align*}
\mathscr{V}^p(t) \leq C \, (\Psi_{1,t}^p + \tilde{\Psi}_{2,t}^p) \, \exp(C \, (\Psi_{1,t}^p + \tilde{\Psi}_{2,t}^p) \, K^*_{2}(t) \, \int_0^t K^*_1(s)\, ds) \, (\|\hat{g}^0\|^p_{L^p(Q_t)} + \|\hat{f}^0\|^p_{L^p(Q_t)}) \, .
\end{align*}
We thus have proved the following continuity estimate:
\begin{prop}\label{estimateselfsecond}
Suppose that $(r^*, \, w^*), \, (r, \, w) \in \phantom{}_0\mathcal{Y}_T$ are solutions to $(r, \, w) = \mathcal{T}^1(r^*, \, w^*)$. Then there is a continuous function $\Psi_9$, increasing in its arguments, such that, for all $t \leq T$,
\begin{align*}
\mathscr{V}(t) \leq & \Psi_9(t, \, \|\hat{u}^0\|_{\mathcal{X}_t}+ \|\hat{\varrho}^0\|_{W^{2,0}_p(Q_t)} + \|\tilde{b}\|_{W^{1,0}_p(Q_t)}+ \|\bar{b}\|_{L^p(Q_t)}, \, \mathscr{V}^*(t)) \times \\
& \times (\|\hat{g}^0\|_{L^p(Q_t)} + \|\hat{f}^0\|_{L^p(Q_t)}) \, .
\end{align*}
\end{prop}
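The Gronwall step behind this estimate can be checked on a discretised toy problem. The following Python sketch (with hypothetical constants standing in for $C \, (\Psi_{1,t}^p + \tilde{\Psi}_{2,t}^p)$, the kernel $K_1^*$ and the data term) saturates the integral inequality and verifies that the resulting sequence stays below the exponential bound:

```python
import math

# Discrete check of Gronwall's lemma: if V^p(t) <= A + B * int_0^t K(s) V^p(s) ds,
# then V^p(t) <= A * exp(B * int_0^t K(s) ds).  All constants are illustrative
# stand-ins for the data term and the prefactor C*(Psi_1^p + Psi_2^p)*K_2^*.
A, B = 2.0, 1.5
K = lambda s: 1.0 + s        # hypothetical integrable kernel K_1^*
dt, T = 1e-3, 2.0
steps = int(T / dt)

V, kint = [A], 0.0
for i in range(1, steps + 1):
    # saturate the integral inequality (explicit Euler of V' = B K V)
    V.append(V[-1] * (1.0 + B * K(i * dt) * dt))
    kint += K(i * dt) * dt   # running value of int_0^t K(s) ds

bound = A * math.exp(B * kint)   # Gronwall bound at time T
assert V[-1] <= bound
```

Since $1+x \le e^x$, each discrete step stays below the corresponding factor of the exponential, so the assertion holds exactly.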
We are now in the position to prove a self-mapping property for sufficiently small data by applying Lemma \ref{selfmapTsecond}.
\begin{lemma}
There is $R_1 > 0$ such that if $\|\hat{g}^0\|_{L^p(Q_T)} + \|\hat{f}^0\|_{L^p(Q_T)} \leq R_1$, the map $\mathcal{T}^1$ is well defined and possesses a unique fixed-point.
\end{lemma}
\begin{proof}
We apply Lemma \ref{selfmapTsecond} with $\Psi(T, \, R_0, \, R_1, \, \eta) := \Psi_9(T, \, R_0, \, \eta) \, R_1$. Here $R_0 = \|\hat{u}^0\|_{\mathcal{X}_T}+ \|\hat{\varrho}^0\|_{W^{2,0}_p(Q_T)} + \|\tilde{b}\|_{W^{1,0}_p(Q_T)} + \|\bar{b}\|_{L^p(Q_T)}$.
Thus, there is $R_1 > 0$ such that if $\|\hat{g}^0\|_{L^p(Q_T)} + \|\hat{f}^0\|_{L^p(Q_T)} \leq R_1$, we can find $\eta_0 > 0$ such that $\mathcal{T}^1$ maps the set $\{\bar{u} \in \phantom{}_0\mathcal{Y}_T \, : \, \|\bar{u}\|_{\mathcal{Y}_T} \leq \eta_0\}$ into itself.
Consider the iteration $\bar{u}^{n+1} := \mathcal{T}^1(\bar{u}^n)$ starting at $\bar{u}^0 = 0$. The sequences $(r^{n}, \, \sigma^n, \, w^n)$, and thus also $(\hat{q}^0+r^n, \, \mathscr{C}(\hat{v}^0 + w^n ), \, \hat{v}^0 + w^n)$, are uniformly bounded in $\mathcal{X}_T$. We show the contraction property with respect to the same lower-order norm as in Theorem \ref{iter}. There are $k_0, \, p_0 > 0$ such that the quantities
\begin{align*}
& E^n(t) := p_0 \, \int_{t}^{t+t_1} \{|\nabla (r^{n}-r^{n-1})|^2 + |\nabla (w^{n}-w^{n-1})|^2\} \, dxds\\
&+ k_0 \, \sup_{\tau \in [t, \, t + t_1]} \{\|(r^n-r^{n-1})(\tau)\|_{L^2(\Omega)}^2 + \|(\sigma^n-\sigma^{n-1})(\tau)\|_{L^2(\Omega)}^2 +\|(w^n-w^{n-1})(\tau)\|_{L^2(\Omega)}^2\}
\end{align*}
satisfy $E^{n+1}(t) \leq \tfrac{1}{2} \, E^{n}(t)$ for some fixed $t_1 > 0$ and every $t \in [0, \, T-t_1]$. Hence the iterates converge in this lower-order norm, and the limit is the unique fixed-point of $\mathcal{T}^1$.
\end{proof}
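The final contraction step is the usual geometric-decay mechanism of Banach's fixed-point theorem. As a minimal illustration, the following sketch iterates a hypothetical scalar contraction with Lipschitz constant $\tfrac12$ in place of the operator $\mathcal{T}^1$ (which acts on function spaces via the energies $E^n$) and checks the halving of the increments:

```python
# Minimal illustration of the contraction argument: a hypothetical scalar
# map with Lipschitz constant 1/2 stands in for the operator T^1 of the text.
def T(u):
    return 0.5 * u + 1.0      # contraction with factor 1/2; fixed point u* = 2

u, increments = 0.0, []
for _ in range(30):
    u_next = T(u)
    increments.append(abs(u_next - u))
    u = u_next

# geometric decay of the increments, mirroring E^{n+1} <= E^n / 2
assert all(b <= 0.5 * a + 1e-15 for a, b in zip(increments, increments[1:]))
assert abs(u - 2.0) < 1e-8    # the iterates approach the fixed point
```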
In order to finish the proof of Theorem \ref{MAIN3}, we want to show how to make $\|\hat{g}^0\|_{L^p(Q_T)} + \|\hat{f}^0\|_{L^p(Q_T)}$ small. We observe that $\hat{g}^0 = \widetilde{ \mathscr{A}}^1(\hat{u}^0)$ and that $\hat{f}^0 = \widetilde{ \mathscr{A}}^3(\hat{u}^0)$. Thus, if an equilibrium solution to $\mathscr{A}(u^{\text{eq}}) = 0$ is at hand, we can expect that $\mathscr{A}(\hat{u}^0) = \mathscr{A}(\hat{u}^0) - \mathscr{A}(u^{\text{eq}})$ remains small if the initial data are close to the equilibrium solution.
We thus consider an equilibrium solution $u^{\text{eq}} = (q^{\text{eq}}, \, \varrho^{\text{eq}}, \, v^{\text{eq}}) \in W^{2,p}(\Omega; \, \mathbb{R}^{N-1}) \times W^{1,p}(\Omega) \times W^{2,p}(\Omega; \, \mathbb{R}^{3})$. This means that the equations \eqref{massstat}, \eqref{momentumsstat} are valid with the vector $\rho^{\text{eq}}$ of partial mass densities obtained from $q^{\text{eq}}$ and $\varrho^{\text{eq}}$ by means of the transformation of Section \ref{changevariables}.
\begin{lemma}
Suppose that $u^{\text{eq}} \in W^{2,p}(\Omega; \, \mathbb{R}^{N-1}) \times W^{1,p}(\Omega) \times W^{2,p}(\Omega; \, \mathbb{R}^{3})$ is an equilibrium solution. Moreover, we assume that the initial data $u^0$ belong to $\text{Tr}_{\Omega \times\{0\}} \, \mathcal{X}_T$ and that the components $\varrho^{\text{eq}}, \, v^{\text{eq}}$ of $u^{\text{eq}}$ and $\varrho^0, \, v^0$ of $u^0$ possess the additional regularity
\begin{align}\label{moreregu}
\varrho^{\text{eq}}, \, \varrho^0 \in W^{2,p}(\Omega), \quad v^{\text{eq}} \in W^{3,p}(\Omega; \, \mathbb{R}^3), \, v^0 \in W^{2,p}(\Omega; \, \mathbb{R}^3) \, .
\end{align}
Then, there exists $R_1 > 0$ such that if $\|u^{\text{eq}}-u^0\|_{\text{Tr}_{\Omega \times\{0\}} \, \mathcal{X}_T} \leq R_1$, then there is a unique global solution $u \in \mathcal{X}_T$ to $\mathscr{A}(u) = 0$ and $u(0) = u_0$.
\end{lemma}
\begin{proof}
We denote $u^1 := u^{\text{eq}}-u^0 \in \text{Tr}_{\Omega \times\{0\}} \, \mathcal{X}_T$. We find extensions $\hat{q}^1 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$ and $\hat{v}^1 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{3})$ with continuity estimates. For instance, we can extend the components of $q^1, \, v^1$ to elements of $W^{2-2/p}_{p}(\mathbb{R}^3)$ and then solve Cauchy problems for the heat equation to extend the functions. Since the assumption \eqref{moreregu} moreover guarantees that $v^1 \in W^{2,p}(\Omega)$, this procedure even yields $\hat{v}^1 \in W^{4,2}_p(Q_T; \, \mathbb{R}^3)$ at least (cf. \cite{ladu}, Chapter 4, Paragraph 3, inequality (3.3)).
The definitions $\hat{q}^{\text{eq}}(x, \, t) := q^{\text{eq}}(x)$ and $\hat{v}^{\text{eq}}(x, \, t) := v^{\text{eq}}(x)$ provide extensions $\hat{q}^{\text{eq}} \in W^{2,\infty}_{p,\infty}$ and $\hat{v}^{\text{eq}} \in W^{3,\infty}_{p,\infty}$. We define
\begin{align*}
\hat{q}^0 := \hat{q}^{\text{eq}} + \hat{q}^1 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1}), \quad \hat{v}^0 := \hat{v}^{\text{eq}} + \hat{v}^1 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{3}) \cap W^{3,0}_p(Q_T; \, \mathbb{R}^3) \, ,
\end{align*}
satisfying
\begin{align}\label{distance}
& \|\hat{q}^0 - \hat{q}^{\text{eq}}\|_{W^{2,1}_p(Q_T)} + \|\hat{v}^0 - \hat{v}^{\text{eq}}\|_{W^{2,1}_p(Q_T)} \leq C \, (\|q^1\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|v^1\|_{W^{2-\frac{2}{p}}_p(\Omega)}) \leq C \, R_1 \, ,\\
& \label{extendbetter}
\|\hat{v}^0\|_{W^{3,0}_p(Q_T; \, \mathbb{R}^3)} \leq C \, (\|v^{\text{eq}}\|_{W^{3,p}(\Omega)} + \|v^0\|_{W^{2,p}(\Omega)}) \, .
\end{align}
In order to extend $\varrho^0$, we solve $\partial_t \hat{\varrho}^0 + \divv( \hat{\varrho}^0 \, \hat{v}^0) = 0$ with initial condition $\hat{\varrho}^0(\cdot, \, 0) = \varrho^0$. In this way we obtain an extension of class $W^{1,1}_{p,\infty}(Q_T)$. Moreover, due to \eqref{extendbetter}, we can show that $\hat{\varrho}^0 \in W^{2,0}_p(Q_T)$ (use the representation formula at the beginning of the proof of Prop. \ref{solonnikov2}).
If we next choose the extension $ \hat{\varrho}^{\text{eq}}(x, \, t) := \varrho^{\text{eq}}(x) \in W^{2,\infty}_{p,\infty}(Q_T)$, then by definition of the equilibrium solution we have $\divv (\hat{\varrho}^{\text{eq}} \, \hat{v}^{\text{eq}}) = 0$ in $Q_T$ and $\partial_t \hat{\varrho}^{\text{eq}} = 0$.
Thus, the difference $\hat{\varrho}^1 := \hat{\varrho}^0 - \hat{\varrho}^{\text{eq}}$ is a solution to $ \partial_t \hat{\varrho}^1 + \divv (\hat{\varrho}^1 \, \hat{v}^0) = - \divv(\hat{\varrho}^0 \, \hat{v}^{1})$. Since $\hat{\varrho}^0 \in W^{1,1}_{p,\infty}(Q_T) \cap W^{2,0}_p(Q_T)$ by construction, the estimate of Lemma \ref{PerturbConti} applies, and invoking also \eqref{distance} this gives
\begin{align*}
\|\hat{\varrho}^1\|_{W^{1,1}_{p,\infty}(Q_T)} \leq & C \, \exp(c \, \int_0^T [\|\hat{v}^0_x\|_{L^{\infty}(\Omega)} + \|\hat{v}^0_{x,x}\|_{L^p(\Omega)} + 1] ds) \times\\
& \times
(\|\hat{\varrho}^0\|_{W^{2,0}_p(Q_T)}^p \, \|\hat{v}^1\|_{L^{\infty}(Q_T)}^p + \|\hat{\varrho}^0\|_{W^{1,1}_{p,\infty}(Q_T)}^p \, \|\hat{v}^1\|_{W^{2,0}_p(Q_T)}^p)\\
\leq & C_T \, \|\hat{v}^1\|_{W^{2,1}_p(Q_T)} \leq C_T \, \, R_1 \, .
\end{align*}
The latter and \eqref{distance} now entail that
\begin{align*}
\|\hat{q}^0 - \hat{q}^{\text{eq}}\|_{W^{2,1}_p(Q_T)} + \|\hat{v}^0 - \hat{v}^{\text{eq}}\|_{W^{2,1}_p(Q_T)} + \|\hat{\varrho}^0-\hat{\varrho}^{\text{eq}}\|_{W^{1,1}_{p,\infty}(Q_T)} \leq C \, R_1 \, .
\end{align*}
Here $C$ is allowed to depend on $T$ and on all the data in their respective norms. Now, recalling Lemma \ref{IMAGESPACE}, we can verify that
\begin{align*}
\widetilde{ \mathscr{A}}(\hat{u}^0) = & \widetilde{ \mathscr{A}}(\hat{u}^{\text{eq}} + \hat{u}^1) = \widetilde{ \mathscr{A}}(\hat{u}^{\text{eq}} + \hat{u}^1) - \widetilde{ \mathscr{A}}(\hat{u}^{\text{eq}})= \int_{0}^1 \widetilde{ \mathscr{A}}^{\prime}(\hat{u}^{\text{eq}} + \theta \, \hat{u}^1) \, d\theta \, \hat{u}^1 \, .
\end{align*}
Thus $ \|\widetilde{ \mathscr{A}}(\hat{u}^0)\|_{\mathcal{Z}_T} \leq C \, R_1$. The definitions of $\hat{g}^0$ and $\hat{f}^0$ in \eqref{Arightlinear2} show that
\begin{align*}
\|\hat{g}^0\|_{L^p(Q_T)} + \|\hat{f}^0\|_{L^p(Q_T)} = \|\widetilde{ \mathscr{A}}^1(\hat{u}^0)\|_{L^p(Q_T)} + \|\widetilde{ \mathscr{A}}^3(\hat{u}^0)\|_{L^p(Q_T)} \leq C \, R_1 \, .
\end{align*}
The claim follows from Proposition \ref{estimateselfsecond}.
\end{proof}
\appendix
\section{Examples of free energies}\label{Legendre}
1. We first consider $h(\rho) := \sum_{i=1}^N n_i \, \ln \frac{n_i}{n^{\text{ref}}}$, where for $i = 1,\ldots,N$ the mass and number densities are related via $m_i \, n_i = \rho_i$ with positive constants $m_i$. We want to show that $h$ is a \emph{Legendre function} on $\mathbb{R}^N_+$. It is clear that $h$ is continuously differentiable on $\mathbb{R}^N_+$; we even have $h \in C^{\infty}(\mathbb{R}^N_+)$. The strict convexity of $h$ is inherited from the strict convexity of $t \mapsto t \, \ln t$ on $\mathbb{R}_+$. The gradient of $h$ is given by
\begin{align*}
\partial_{\rho_i} h (\rho) = \frac{1}{m_i} \, (1+ \ln \frac{n_i}{n^{\text{ref}}}) \, .
\end{align*}
Thus $\lim_{k \rightarrow \infty} |\nabla_{\rho} h(\rho^k)| = + \infty$ whenever $\{\rho^k\}_{k \in \mathbb{N}}$ is a sequence of points approaching the boundary of $\mathbb{R}^N_+$. Overall we have shown that $h$ is a continuously differentiable, strictly convex, essentially smooth function on $\mathbb{R}^N_+$ (where \emph{essentially smooth} precisely means the blow-up of the gradient on the boundary). Functions satisfying these properties are called of \emph{Legendre type} (cf. \cite{rockafellar}, page 258).
Moreover, we can directly show that the gradient of $h$ is surjective onto $\mathbb{R}^N$, since the equations $\partial_{\rho_i} h = \mu_i$ have the unique solution $\rho_i = m_i \, n^{\text{ref}} \, e^{m_i \, \mu_i -1}$ for arbitrary $\mu \in \mathbb{R}^N$.
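This explicit inverse can be verified by a short numerical check; the masses $m_i$, the reference density $n^{\text{ref}}$ and the potentials $\mu_i$ below are arbitrary illustrative values:

```python
import math

# h(rho) = sum_i n_i * log(n_i / n_ref) with n_i = rho_i / m_i.
# Gradient:        d h / d rho_i = (1/m_i) * (1 + log(n_i / n_ref)).
# Claimed inverse: rho_i = m_i * n_ref * exp(m_i * mu_i - 1).
m = [1.0, 2.5, 0.7]          # hypothetical molecular masses m_i
n_ref = 3.0                  # hypothetical reference number density
mu = [-1.0, 0.3, 2.0]        # arbitrary chemical potentials mu in R^N

rho = [mi * n_ref * math.exp(mi * mui - 1.0) for mi, mui in zip(m, mu)]
grad_h = [(1.0 / mi) * (1.0 + math.log((ri / mi) / n_ref))
          for mi, ri in zip(m, rho)]

# the gradient at the claimed rho reproduces the prescribed potentials
assert all(abs(g - mui) < 1e-12 for g, mui in zip(grad_h, mu))
```

Indeed, $n_i = n^{\text{ref}} e^{m_i \mu_i - 1}$ gives $\ln(n_i/n^{\text{ref}}) = m_i \mu_i - 1$, so the gradient formula collapses to $\mu_i$ exactly.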
2. The second example is $h(\rho) = F\left(\sum_{i=1}^N n_i \, \bar{v}_i^{\text{ref}}\right) + \sum_{i=1}^N n_i \, \ln \frac{n_i}{n}$, with the total number density $n = \sum_{j=1}^N n_j$. Here $F$ is a given convex function of class $C^2(\mathbb{R}_+)$. We assume that
\begin{itemize}
\item $F^{\prime\prime}(t) > 0$ for all $t > 0$;
\item $F^{\prime}(t) \rightarrow -\infty$ for $t \rightarrow 0$;
\item $\frac{1}{t} \, F(t) \rightarrow + \infty$ for $t \rightarrow +\infty$.
\end{itemize}
In other words, $F$ is a co-finite function of Legendre type on $\mathbb{R}_+$. The numbers $\bar{v}^{\text{ref}}_i$ are positive constants. Choosing $\bar{v}_1^{\text{ref}} = \ldots = \bar{v}_N^{\text{ref}} = 1$ and $F(t) = t \, \ln t$ we recover the preceding example.
The function $h$ is clearly of class $C^2(\mathbb{R}^N_+)$. We compute the derivatives
\begin{align*}
\partial_{\rho_i}h(\rho) = & F^{\prime}(v \cdot \rho) \, v_i + \frac{1}{m_i} \, \ln \frac{n_i}{n}\, ,\\
\partial_{\rho_i,\rho_j}h(\rho) = & F^{\prime\prime}(v \cdot \rho) \, v_i \, v_j+ \frac{1}{m_i\, m_j} \, (\frac{\delta_{i,j}}{n_j} - \frac{1}{n}) \, ,
\end{align*}
in which we have for simplicity set $v_i := \bar{v}^{\text{ref}}_i/m_i$. For $\xi \in \mathbb{R}^N$, we verify that
\begin{align*}
D^2h(\rho) \xi \cdot \xi = F^{\prime\prime}(v \cdot \rho) \, (v \cdot \xi)^2 + \sum_{i=1}^N \left(\frac{\xi_i}{\sqrt{n_i} \, m_i}\right)^2 - \frac{1}{n} \, \left(\sum_{i=1}^N \frac{\xi_i}{m_i}\right)^2 \, .
\end{align*}
With the Cauchy-Schwarz inequality, we see that $\sum_{i=1}^N \left(\frac{\xi_i}{\sqrt{n_i} \, m_i}\right)^2 - \frac{1}{n} \, \left(\sum_{i=1}^N \frac{\xi_i}{m_i}\right)^2 \geq 0$, with equality only if $\xi_i = \lambda \, n_i \, m_i$ for some $\lambda \in \mathbb{R}$. In this case however, we have $\xi \cdot v = \lambda \, \sum_{i=1}^N \rho_i \, v_i$, so that $ D^2h(\rho) \xi \cdot \xi = \lambda^2 \, F^{\prime\prime}(v \cdot \rho) \, (v \cdot \rho)^2 \geq 0$, with equality only if $\lambda = 0$. This proves that $ D^2h(\rho) \xi \cdot \xi > 0$ for all $\xi \in \mathbb{R}^N \setminus \{0\}$, which implies the strict convexity.
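The positivity of the quadratic form can likewise be probed numerically. The following sketch evaluates the Hessian formula above for the special choice $F(t) = t \ln t$ (so $F^{\prime\prime}(t) = 1/t$) at randomly sampled states and directions; the masses and reference volumes are illustrative:

```python
import math, random

random.seed(0)
m = [1.0, 2.0, 3.0]           # hypothetical molecular masses m_i
v_ref = [1.0, 2.0, 0.5]       # hypothetical reference volumes v_i^ref
v = [vr / mi for vr, mi in zip(v_ref, m)]   # v_i = v_i^ref / m_i

def hessian_form(rho, xi):
    # D^2 h(rho) xi . xi for F(t) = t*log(t), i.e. F''(t) = 1/t
    n = [ri / mi for ri, mi in zip(rho, m)]
    ntot = sum(n)
    v_dot_rho = sum(vi * ri for vi, ri in zip(v, rho))
    v_dot_xi = sum(vi * x for vi, x in zip(v, xi))
    s1 = sum((x / (math.sqrt(ni) * mi)) ** 2 for x, ni, mi in zip(xi, n, m))
    s2 = sum(x / mi for x, mi in zip(xi, m)) ** 2 / ntot
    return v_dot_xi ** 2 / v_dot_rho + s1 - s2

results = []
for _ in range(1000):
    rho = [random.uniform(0.1, 5.0) for _ in m]   # interior states
    xi = [random.uniform(-1.0, 1.0) for _ in m]   # test directions
    results.append(hessian_form(rho, xi))

assert min(results) > 0.0   # strict positivity at all sampled points
```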
In order to show that $h$ is essentially smooth, we consider a sequence $\{\rho^k\}_{k \in \mathbb{N}}$ approaching the boundary of $\mathbb{R}^N_+$. We first consider the case that $\rho^k$ does not converge to zero. In this case, we clearly have $\inf_{k \in \mathbb{N}} \min\{n^k, \, v \cdot \rho^k\} \geq c_0$ for some positive constant $c_0$. Thus
\begin{align*}
|\nabla_{\rho} h(\rho^k)| \geq & \frac{1}{\max m} \, \sup_{i=1,\ldots,N} |\ln n_i^k| - |v| \, \sup_{k} |F^{\prime}(v \cdot \rho^k)| - \frac{1}{\min m} \sup_{k} |\ln n^k|\\
\geq & \frac{1}{\max m} \, \sup_{i=1,\ldots,N} |\ln n_i^k| - C \rightarrow + \infty \, .
\end{align*}
The second case is that $\rho^k$ converges to zero. In this case the fractions $\frac{n^k_i}{n^k}$ might all remain bounded. But our assumptions on $F$ guarantee that $F^{\prime}(v \cdot \rho^k) \rightarrow - \infty$, so that $|\nabla_{\rho} h(\rho^k)| \rightarrow + \infty$. Thus, the function $h$ is essentially smooth, and hence a function of Legendre type on $\mathbb{R}^N_+$.
It remains to prove that $\nabla_{\rho} h$ is a surjective mapping. We first verify that $h$ is co-finite. In the present context, it is sufficient to show that $\lim_{\lambda \rightarrow +\infty} \frac{h(\lambda \, y)}{\lambda} = + \infty$ for all $y \in \mathbb{R}^N_+$. This follows directly from the fact that $\lim_{t \rightarrow +\infty} \frac{F(t)}{t} = + \infty$. We then infer the surjectivity of $\nabla_{\rho} h$ from Corollary 13.3.1 in \cite{rockafellar}.
3. Similar arguments allow to deal with the case $h(\rho) = \sum_{i=1}^N K_i \, n_i\, \bar{v}_i^{\text{ref}} \, ((n_i\, \bar{v}_i^{\text{ref}})^{\alpha_i-1} + \ln (n_i\, \bar{v}_i^{\text{ref}})) + k_B \, \theta \, \sum_{i=1}^N n_i \, \ln \frac{n_i}{n}$.
\section{Proof of the Lemma \ref{A1linUMFORM}}\label{technique}
The argument is based on covering $Q_t$, $t >0$, with sufficiently small sets and localising the problem therein. This is essentially carried out by standard meshing techniques, so we spare the reader the rather technical details. For every $r > 0$, we can find $m = m(r) \in \mathbb{N}$ and, for each $j= 1,\ldots,m$, a point $(x^j, \, t_j) \in Q_t$ and sets $Q^j$ that possess the following properties:
\begin{itemize}
\item $\overline{Q_t} \subset \bigcup_{j=1}^m Q^j$;
\item $\sup_{(x, \, t) \in Q^j} |t-t_j| \leq c \, r$ and $\sup_{(x, \, t) \in Q^j} |x-x^j| \leq c \, \sqrt{r}$;
\item $Q^j$ intersects a finite number, not larger than some $m_0 \in \mathbb{N}$, of elements of the collection $Q^1, \ldots, Q^m$. Here $m_0$ is independent of $r$ and $t$.
\end{itemize}
For $j = 1,\ldots,m$, we can moreover choose a non-negative function $\eta^j \in C^{2,1}(Q^j)$ with support in $Q^j$. The family $\eta^1,\ldots,\eta^m$ is assumed to nearly provide a partition of unity, that is, to possess the following properties:
\begin{align*}\begin{split}
& c_0 \leq \sum_{j=1}^m \eta^j(x, \, t) \leq C_0 \text{ for all } (x, \, t) \in \overline{Q}_t\\
& \|\eta^j_x\|_{L^{\infty}(Q^j)} \leq C_1 \, r^{-\frac{1}{2}}, \quad \|\eta^j_t\|_{L^{\infty}(Q^j)} + \|\eta^j_{x,x}\|_{L^{\infty}(Q^j)} \leq C_2 \, r^{-1} \, .
\end{split}
\end{align*}
Here $c_0 > 0$ and $C_i$ ($i=0,1,2$) are constants independent of $r$ and $t$. Moreover, we can also enforce that $\nu \cdot \nabla \eta^j = 0$ on $S^j := Q^j \cap (\partial \Omega \times [0, \, + \infty[)$.
We let $\Omega^j := Q^j \cap (\Omega \times\{0\})$.
After inversion of $R_q^*$, the vector field $q$ satisfies the equations \eqref{petrovskisyst}, that is
\begin{align}\label{Petrovski2}
q_t - \underbrace{[R_q^*]^{-1} \, \widetilde{M}^*}_{=:A^*} \, \triangle q = [R_q^*]^{-1} \, g + [R_q^*]^{-1} \, \nabla \widetilde{M}^* \cdot \nabla q \, =: \tilde{g} \, .
\end{align}
Multiplying \eqref{Petrovski2} with $\eta^j$, we next derive the identities
\begin{align}\label{transformee}
\eta^j \, q_t - \eta^j \, A^j \, \triangle q = & \eta^j \, \tilde{g} + \eta^j \, (A^* - A^j )\, \triangle q, \qquad \quad A^j := [R_q^*]^{-1} \, \widetilde{M}^*(x^j, \, t_j) \, .
\end{align}
By Lemma \ref{ALGEBRA}, the eigenvalues $p_1^j, \ldots, p_{N-1}^j$ of $A^j$ are real and strictly positive. Recalling \eqref{EVPROD}, we have on $[0, \, t]$ the bound
\begin{align*}
\frac{\lambda_{0}(t, \, \widetilde{M}^*)}{\lambda_{1}(t, \, R_q^*)} \leq p_i^j \leq \frac{\lambda_{1}(t, \, \widetilde{M}^*)}{\lambda_{0}(t, \, R_q^*)} \, .
\end{align*}
Further, there exists a basis $\xi^{1}, \, \ldots, \xi^{N-1} \in \mathbb{R}^{N-1}$ of eigenvectors of $A^j$.
For $i = 1,\ldots, N-1$, we multiply the equation \eqref{transformee} with $\xi^i$. For $u_{i} := \xi^{i} \cdot q$ ($i = 1,\ldots, N-1$) we therefore obtain that
$\eta^j \, (u_{i,t} - p_i^j \, \triangle u_i) = \eta^j \, \xi^i \cdot (\tilde{g} + (A^* - A^j )\, \triangle q )$. We define $\tilde{u}^j_i := u_i \, \eta^j$, and for this function we obtain that
\begin{align*}
\tilde{u}^j_{i,t} - p_i^j \, \triangle \tilde{u}^j_i = & h^j_i := \eta^j \, \xi^i \cdot (\tilde{g} + (A^* - A^j )\, \triangle q ) + \eta^j_t \, u_i - p_i^j \, (2 \, u_{i,x} \cdot \eta^j_x + \triangle \eta^j \, u_i) \, .
\end{align*}
Recall that $\nu \cdot \nabla q = 0$ on $S_T$ and $q(\cdot, \, 0) = q_0$ in $\Omega$. Due to our restrictions on the choice of $\eta^j$, we then readily compute that $\nu \cdot \nabla \tilde{u}^j_i = \nu \cdot \nabla \eta^j \, \xi^i \cdot q = 0$ on $S_t$. Moreover, $\tilde{u}^j_i(0) = \eta^j(0) \, \xi^i \cdot q_0 =: \tilde{u}^{0,j}_i$ in $\Omega$. Since $q^0$ satisfies the initial compatibility condition, $\tilde{u}^{0,j}_i$ is a compatible datum as well. Standard results for the heat equation now yield for arbitrary $t \leq T$
\begin{align}\label{estimatetildeuij}
\|\tilde{u}^j_i\|_{W^{2,1}_p(Q_t)} \leq C_1 \, \frac{\max\{1, \, p^j_i\}}{\min\{1, \, p^j_i\}} \, (p^j_i \, \|\tilde{u}^{0,j}_i\|_{W^{2-\frac{2}{p}}_p(\Omega)} + \|h^j_i\|_{L^p(Q_t)}) \, ,
\end{align}
where $C_1$ depends only on $\Omega$ (see the Remark \ref{Tindependence}). In order to estimate $\|h^j_i\|_{L^p(Q_t)}$, we introduce $Q^j_t := Q^j \cap Q_t$ and observe that
\begin{align}\label{estimatehij1}
\|\eta^j \, \xi^i \cdot \tilde{g}\|_{ L^p(Q_t)} \leq & C_0 \, \| \tilde{g}\|_{ L^p(Q^j_t)} \nonumber \, , \\
\|(\eta^j_t - p_i^j \,\triangle \eta^j) \, u_i\|_{ L^p(Q_t)} \leq & C \, (1+\lambda_{\max}(A^j)) \, r^{-1} \, \|q\|_{L^p(Q^j_t)} \, ,\nonumber\\
2 \, p_i^j \, \|u_{i,x} \cdot \eta^j_x\|_{ L^p(Q_t)} \leq & C \, \lambda_{\max}(A^j) \, r^{-1/2} \, \|q_x\|_{L^p(Q^j_t)} \leq C \, \lambda_{\max}(A^j) \, (1+r^{-1}) \,\|q_x\|_{L^p(Q^j_t)} \, , \nonumber\\
\|\eta^j \, \xi^i \, (A^* - A^j )\, \triangle q \|_{L^p(Q_t)} \leq & C \, \|A^* - A^j\|_{L^{\infty}(Q^j_t)} \, \|D^2q\|_{L^p(Q^j_t)} \, .
\end{align}
Since, by definition, $A^j = [R_q^*]^{-1} \, \widetilde{M}^*(x^j, \, t_j) = A^*(x^j, \, t_j)$, we have
\begin{align}\label{estimatehoelderprod}
\|A^* - A^j\|_{L^{\infty}(Q^j_t)} \leq \Big[[R_q^*]^{-1} \, \widetilde{M}^*\Big]_{C^{\beta,\frac{\beta}{2}}(Q_t)} \, r^{\frac{\beta}{2}} \, .
\end{align}
We call $F: \mathbb{R}_+ \times \mathbb{R}^{N-1} \rightarrow \mathbb{R}^{(N-1) \times (N-1)}$ the map $(s, \, \xi) \mapsto [R_q(s, \, \xi)]^{-1} \, \widetilde{M}(s, \, \xi)$. The derivatives of $F$ satisfy the estimates
\begin{align*}
& |\partial_s F(\varrho^*, \, q^*)| \leq \frac{\lambda_{\max}( \widetilde{M}^*)}{\lambda_{\min}^2(R_q^*)} \, |R_{q,\varrho}(\varrho^*, \, q^*)| + \frac{1}{\lambda_{\min}(R_q^*)} \, |\widetilde{M}_{\varrho}(\varrho^*, \, q^*)| \, ,\\
& |\partial_{\xi}F(\varrho^*, \, q^*)| \leq \frac{\lambda_{\max}( \widetilde{M}^*)}{\lambda_{\min}^2(R_q^*)} \, |R_{q,q}(\varrho^*, \, q^*)| + \frac{1}{\lambda_{\min}(R_q^*)} \, |\widetilde{M}_{q}(\varrho^*, \, q^*)| \, ,\\
& |\partial_s F(\varrho^*, \, q^*)| + |\partial_{\xi}F(\varrho^*, \, q^*)| \leq \left(\frac{\lambda_{1}(t,\, \widetilde{M}^*)}{\lambda_{0}^2(t, \, R_q^*)} + \frac{1}{\lambda_{0}(t, \, R_q^*)}\right) \, (L^*(t, \, R_q) + L^*(t, \, \widetilde{M})) =: \ell^*_t \, .
\end{align*}
By standard arguments, we have
\begin{align}\label{estimatehoelderprod2}
\Big[[R_q^*]^{-1} \, \widetilde{M}^*\Big]_{C^{\beta,\frac{\beta}{2}}(Q_t)} \leq \ell^*_t \, ([\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}) \, \, .
\end{align}
Combining \eqref{estimatetildeuij}, \eqref{estimatehij1}, \eqref{estimatehoelderprod} and \eqref{estimatehoelderprod2}, we get
\begin{align*}
& \|\tilde{u}^j_i\|_{W^{2,1}_p(Q_t)} \leq \tilde{C}_1\, \frac{\max\{1, \, \lambda_{\max}(A^j)\}}{\min\{1, \, \lambda_{\min}(A^j)\}}\, \times \\
& \Big( \big[(1+\lambda_{\max}(A^j)) \, (1+r^{-1}) \, \{\|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega^j; \, \mathbb{R}^{N-1})} +\|q\|_{W^{1,0}_p(Q^j_t)} \} + \| \tilde{g}\|_{ L^p(Q^j_t)}\big] \\
& \quad + \ell^*_t \, ([\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}) \, r^{\frac{\beta}{2}} \, \|D^2q\|_{L^p(Q^j_t)} \Big)\, .
\end{align*}
Recall now that $\tilde{u}^j_i = \eta^j \, \xi^i \cdot q = \xi^i \cdot w^j$ with $w^j = \eta^j \, q$. Here $\{\xi^i\}$ are eigenvectors of $A^j$ and form a basis of $\mathbb{R}^{N-1}$. It is moreover shown in the Lemma \ref{ALGEBRA} that there are orthonormal vectors $v^1,\ldots,v^{N-1}$ such that $\xi^i = B^j \, v^i$ for $i =1,\ldots,N-1$, where $B^j := [R_q(\varrho^*(x^j,t_j), \, q^*(x^j,\, t_j))]^{\frac{1}{2}}$. Thus, $w^j \cdot \xi^i = B^j w^j \cdot v^i$. For any norm $\|\cdot \| $ on vector fields of length $N-1$ defined on $Q_t$ we then have
\begin{align*}
& \|q\, \eta^j\| = \|w^j\| = \|[B^j]^{-1} \, B^j w^j \| \leq |[B^j]^{-1}|_{\infty} \, \|B^j w^j\| = |[B^j]^{-1}|_{\infty} \, \|\sum_{i=1}^{N-1} (v^i \cdot B^j w^j) \, v^i\| \\
& \quad \leq c \, |[B^j]^{-1}|_{\infty} \, \sum_{i=1}^{N-1} \|B^j \, w^j \cdot v^i\| = c \, |[B^j]^{-1}|_{\infty} \, \sum_{i=1}^{N-1} \|w^j \cdot \xi^i\|\\
& \quad \leq \frac{c_1}{[\lambda_{\min}(R_q(\varrho^*(x^j,t_j), \, q^*(x^j,\, t_j)))]^{\frac{1}{2}}} \, \sum_{i=1}^{N-1} \|\tilde{u}^j_i\|\leq
\frac{c_1}{[\lambda_{0}(t, \, R_q^*)]^{\frac{1}{2}}} \, \sum_{i=1}^{N-1} \|\tilde{u}^j_i\| \, .
\end{align*}
Choosing $\|\cdot \| = \|\cdot \|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1})}$, it follows that
\begin{align*}
& \|q \, \eta^j\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1})} \leq \bar{C}_1 \,
\frac{\max\{1, \, \lambda_{\max}(A^j)\}}{[\lambda_{0}(t, \, R_q^*)]^{\frac{1}{2}}\, \min\{1, \, \lambda_{\min}(A^j)\}} \times\\
& \Big(\, \big[(1+\lambda_{\max}(A^j)) \, (1+r^{-1}) \, \{\|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega^j; \, \mathbb{R}^{N-1})} +\|q\|_{W^{1,0}_p(Q^j_t)} \} + \| \tilde{g}\|_{ L^p(Q^j_t)}\big]\\
& \quad +\ell^*_t \, ([\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}) \, r^{\frac{\beta}{2}} \, \|D^2q\|_{L^p(Q^j_t)}\Big) \, .
\end{align*}
We estimate $\lambda_{\min}(A^j) \geq \lambda_0([R_q^*]^{-1}\widetilde{M}^*)$ etc. We recall that for each $j$ there are at most $m_0$ indices $i_1,\ldots,i_{m_0} \neq j$ such that $Q^j \cap Q^{i_k} \neq \emptyset $. Thus $\sum_{j=1}^m \|f\|_{L^p(Q^j_t)} \leq (m_0 + 1) \, \|f\|_{L^p(Q_t)}$. We easily verify that
\begin{align*}
\|q \, \eta^j\|_{W^{2,1}_p(Q_t)} \geq & \|q_t \, \eta^j\|_{L^p(Q_t)} +\sum_{0\leq |\alpha|\leq 2} \|D^{\alpha}_xq \, \eta^j\|_{L^p(Q_t)} - c \, r^{-\frac{1}{2}} \, \|q_x\|_{L^p(Q^j_t)} - c \, r^{-1} \, \|q\|_{L^p(Q^j_t)} \, .
\end{align*}
After summing up for $j = 1,\ldots,m$ and using the properties of our covering again, we obtain
\begin{align*}
& \|q\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1})} \leq \bar{C}_1 \,
\frac{ \max\{1, \, \lambda_{1}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\}}{\lambda_{0}^{\frac{1}{2}}(t, \, R_q^*)\, \min\{1, \, \lambda_{0}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\}}\, \times\\
& \Big( \big[(1+\lambda_{1}(t, \, [R_q^*]^{-1}\widetilde{M}^*)) \, (1+r^{-1}) \, \{\|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1})} +\|q\|_{W^{1,0}_p(Q_t)} \} + \| \tilde{g}\|_{ L^p(Q_t)}\big]\\
& \quad + \, \ell^*_t \, ([\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)}) \, r^{\frac{\beta}{2}} \, \|D^2q\|_{L^p(Q_t)} \Big)\, .
\end{align*}
Since $r$ is a free parameter and $t$ fixed, we can choose
\begin{align*}
r^{\frac{\beta}{2}} := \frac{1}{2} \, \min\Big\{1, \, \frac{\lambda_{0}^{\frac{1}{2}}(t, \, R_q^*)\, \min\{1, \, \lambda_{0}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\}}{\bar{C}_1 \, \max\{1, \, \lambda_{1}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\} \, \ell_t^* \, ([\varrho^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)} + [q^*]_{C^{\beta,\frac{\beta}{2}}(Q_t)})}\Big\}
\end{align*}
to obtain an estimate
\begin{align}\label{qcontrol}
& \frac{1}{2} \, \|q\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1})} \leq \bar{C}_1\, \frac{\max\{1, \, \lambda_{1}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\}}{\lambda_{0}^{\frac{1}{2}}(t, \, R_q^*)\, \min\{1, \, \lambda_{0}(t, \, [R_q^*]^{-1}\widetilde{M}^*)\}}\, \times \nonumber\\
& [(1+\lambda_{1}(t, \, [R_q^*]^{-1}\widetilde{M}^*)) \, (1+r^{-1}) \, \{\|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1})} +\|q\|_{W^{1,0}_p(Q_t)} \} + \| \tilde{g}\|_{ L^p(Q_t)} ] \, .
\end{align}
Due to the definition of $\tilde{g}$ in \eqref{Petrovski2}, and since $\nabla \widetilde{M}^* = \widetilde{M}_{\varrho}(\varrho^*, \, q^*) \, \nabla \varrho^* + \widetilde{M}_{q}(\varrho^*, \, q^*) \, \nabla q^*$ it follows that
\begin{align*}
\|\tilde{g}\|_{L^p(Q_t)} \leq & \frac{1}{\lambda_0(t, \, R_q^*)} \, (\|g\|_{L^p(Q_t)} + \|\nabla \widetilde{M}^* \cdot \nabla q\|_{L^p(Q_t)})\\
\leq & \frac{1}{\lambda_0(t, \, R_q^*)} \, (\|g\|_{L^p(Q_t)} + L(t, \, \widetilde{M}^*)\, [\|\nabla \varrho^* \cdot \nabla q\|_{L^p(Q_t)} + \|\nabla q^* \cdot \nabla q\|_{L^p(Q_t)}]) \, .
\end{align*}
We define $\phi^*_{0,t}$ and $\phi^*_{1,t}$ via \eqref{LinftyFACTORS}. Due to Lemma \ref{rhonewlemma} and Lemma \ref{Mnewlemma} on the coefficients $R$ and $\widetilde{M}$, we see that $\phi^*_{0,t}$ and $\phi^*_{1,t}$ are bounded by a continuous function of $\|\varrho^*\|_{L^{\infty}(Q_t)}$, of $\|q^*\|_{L^{\infty}(Q_t)}$ and of $[m^*(t)]^{-1}$. Moreover, $\phi^*_{0,t}$ and $\phi^*_{1,t}$ are determined only by the eigenvalues of $R_q^*$ and $\widetilde{M}^*$ and their Lipschitz constants over the range of $\varrho^*$, $q^*$. In order to obtain a control on $\sup_{s\leq t} \|q(s)\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1})}$, we apply inequality (3) of \cite{solocompress}, which yields
\begin{align*}
\sup_{s\leq t} \|q(s)\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1})} \leq \|q^0\|_{W^{2-\frac{2}{p}}_p(\Omega; \, \mathbb{R}^{N-1})} + C \, \|q\|_{W^{2,1}_p(Q_t; \, \mathbb{R}^{N-1})}
\end{align*}
for some constant $C$ independent of $t$, and combine it with \eqref{qcontrol}.
\begin{rem}\label{Tindependence}
Consider the problem $\lambda \, \partial_t u - \triangle u = f$ in $Q_t$ with $u(x, \, 0) = u_0(x)$ in $\Omega$ and $\nu(x)\cdot \nabla u = 0$ on $S_t$. Then $\|u\|_{W^{2,1}_p(Q_t)} \leq C_{1} \, \frac{\max\{1, \, \lambda\}}{\min\{1,\lambda\}} \, (\|f\|_{L^p(Q_t)} + \|u_0\|_{W^{2-2/p}_p(\Omega)})$, where $C_{1}$ depends only on $\Omega$.
\end{rem}
\begin{proof}
We find an extension $\hat{u}_0 \in W^{2,1}_p(Q_{\infty})$ for $u_0$ such that $\|\hat{u}_0\|_{ W^{2,1}_p(Q_{\infty})} \leq c \, \|u_0\|_{W^{2-\frac{2}{p}}_p(\Omega)}$.
We then look for the solution $v$ to $\lambda \, \partial_t v - \triangle v = (f - \lambda \, \partial_t \hat{u}_0 + \triangle \hat{u}_0) \, \chi_{[0,t]} =: g$ in $\Omega \times \mathbb{R}_+$ with $v(x, \, 0) = 0$ in $\Omega$ and $\nu(x)\cdot \nabla v = 0$ on $\partial \Omega \times \mathbb{R}_+$. In order to solve this problem, we rescale time, defining $\tilde{v}(s, \, x) := v(\lambda \, s, \, x)$. Clearly $\partial_s \tilde{v} - \triangle \tilde{v} = \tilde{g}$ in $\Omega \times \mathbb{R}_+$ with $\tilde{v}(x, \, 0) = 0$ in $\Omega$ and $\nu(x)\cdot \nabla \tilde{v} = 0$ on $\partial \Omega \times \mathbb{R}_+$, where $\tilde{g}(s, \, x) := g(\lambda \, s, \, x)$. Thus $\|\tilde{v}\|_{W^{2,1}_p(\Omega \times \mathbb{R}_+)} \leq C_1 \, \|\tilde{g}\|_{L^p(\Omega \times \mathbb{R}_+)}$ with $C_1$ depending only on $\Omega$. Scaling back to the original time variable, we obtain
\begin{align*}
\lambda^{1+1/p} \, \|\partial_t v\|_{L^p(\Omega \times \mathbb{R}_+)} + \lambda^{1/p} \, \sum_{0 \leq |\alpha|\leq 2} \|D^{\alpha}_x v\|_{L^p(\Omega \times \mathbb{R}_+)} \leq C_1 \, \lambda^{1/p} \, \|g\|_{L^p(\Omega \times \mathbb{R}_+)}\, .
\end{align*}
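For the reader's convenience, we record the effect of the substitution $\tilde{v}(s, \, x) = v(\lambda \, s, \, x)$ on the norms (a routine change of variables $t = \lambda \, s$):
\begin{align*}
\|\partial_s \tilde{v}\|_{L^p(\Omega \times \mathbb{R}_+)} = \lambda^{1-\frac{1}{p}} \, \|\partial_t v\|_{L^p(\Omega \times \mathbb{R}_+)}\, , \qquad \|D^{\alpha}_x \tilde{v}\|_{L^p(\Omega \times \mathbb{R}_+)} = \lambda^{-\frac{1}{p}} \, \|D^{\alpha}_x v\|_{L^p(\Omega \times \mathbb{R}_+)}\, ,
\end{align*}
and $\|\tilde{g}\|_{L^p(\Omega \times \mathbb{R}_+)} = \lambda^{-\frac{1}{p}} \, \|g\|_{L^p(\Omega \times \mathbb{R}_+)}$. Inserting these identities into the estimate for $\tilde{v}$ and multiplying through by $\lambda^{\frac{2}{p}}$ yields the preceding display.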
By the uniqueness theorem for the heat equation, we must have $u - \hat{u}_0 = v$ in $Q_t$. Since $g = 0$ on $]t, \, + \infty[$, it follows that
\begin{align*}
\|u\|_{W^{2,1}_p(Q_t)} \leq\|\hat{u}_0\|_{ W^{2,1}_p(Q_{t})} + C_1 \, \frac{1}{\min\{\lambda, \, 1\}} \, ( \|f\|_{L^p(Q_t)} + \lambda \, \|\partial_t \hat{u}_0\|_{L^p(Q_t)} + \|\triangle \hat{u}_0\|_{L^p(Q_t)}) \, .
\end{align*}
The claim follows.
\end{proof}
\section{Auxiliary statements}
\begin{lemma}\label{ALGEBRA}
Suppose that $A, \, B \in \mathbb{R}^{N\times N}$ are two positive definite symmetric matrices. Then $A \, B$ possesses only real, strictly positive eigenvalues and
\begin{align*}
\lambda_{\min}(A) \, \lambda_{\min}(B) \leq \lambda_{\min}(A \, B) \leq \lambda_{\max}(A \, B) \leq \lambda_{\max}(A) \, \lambda_{\max}(B) \, .
\end{align*}
Moreover, there are orthonormal vectors $\eta^1, \ldots, \eta^N \in \mathbb{R}^N$ such that the vectors $\xi^i := A^{\frac{1}{2}} \, \eta^i$ ($i = 1,\ldots,N$) form a basis of $\mathbb{R}^N$ consisting of eigenvectors of $A \, B$.
\end{lemma}
\begin{proof}
Define $C := A^{\frac{1}{2}} \, B \, A^{\frac{1}{2}}$. Since $A^{\frac{1}{2}}$ is symmetric and $B$ is positive definite, the matrix $C$ is symmetric and positive definite. Since $A \, B = A^{\frac{1}{2}} \, C \, A^{-\frac{1}{2}}$, the eigenvalues of $A \, B$ are those of $C$, hence real and strictly positive. The eigenvalue bounds follow from the Rayleigh quotient of $C$: for $|\eta| = 1$ we have $\langle C \, \eta, \, \eta\rangle = \langle B \, A^{\frac{1}{2}} \eta, \, A^{\frac{1}{2}} \eta\rangle$, and combining $\lambda_{\min}(B) \, |A^{\frac{1}{2}} \eta|^2 \leq \langle B \, A^{\frac{1}{2}} \eta, \, A^{\frac{1}{2}} \eta\rangle \leq \lambda_{\max}(B) \, |A^{\frac{1}{2}} \eta|^2$ with $\lambda_{\min}(A) \leq |A^{\frac{1}{2}} \eta|^2 \leq \lambda_{\max}(A)$ yields the claim. Finally, choose $\eta^1, \ldots, \eta^N \in \mathbb{R}^N$ an orthonormal basis of eigenvectors for $C$. Then $A \, B \, \xi^i = A^{\frac{1}{2}} \, C \, \eta^i = \lambda_i(C) \, \xi^i$, so that each $\xi^i := A^{\frac{1}{2}} \, \eta^i$ is an eigenvector of $A \, B$.
\end{proof}
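As a quick numerical illustration of the lemma (a sanity check on an ad hoc $2\times 2$ example, not part of the proof), one can verify the spectral statements directly:

```python
import math

# Illustrative sanity check of the lemma on a concrete 2x2 example
# (the matrices below are chosen ad hoc; this is not part of the proof).
A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric positive definite, eigenvalues 1 and 3
B = [[3.0, 0.0], [0.0, 1.0]]   # symmetric positive definite, eigenvalues 3 and 1

def matmul(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

C = matmul(A, B)
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
disc = tr * tr - 4.0 * det          # positive discriminant <=> real eigenvalues

lam_min = (tr - math.sqrt(disc)) / 2.0
lam_max = (tr + math.sqrt(disc)) / 2.0

assert disc > 0 and lam_min > 0                         # real, strictly positive
assert 1.0 * 1.0 <= lam_min and lam_max <= 3.0 * 3.0    # the product bounds
print(round(lam_min, 3), round(lam_max, 3))             # → 1.354 6.646
```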
\begin{lemma}\label{HOELDERlemma}
For $0 \leq \beta < \min\{1, 2-\frac{5}{p}\}$ we define
\begin{align*}
\gamma := \begin{cases}
\frac{1}{2} \, (2 - \frac{5}{p} -\beta) & \text{ for } 3<p<5\\
(1-\beta) \, \frac{p-1}{3+p} & \text{ for } 5 \leq p
\end{cases}
\end{align*}
Then there is $C = C(t)$, bounded on finite time intervals and with $C(0) = C_0$ depending only on $\Omega$, such that for all $q^* \in W^{2,1}_{p}(Q_t)$
\begin{align*}
\|q^*\|_{C^{\beta,\frac{\beta}{2}}(Q_t)} \leq \|q^*(0)\|_{C^{\beta}(\Omega)} + C(t) \, t^{\gamma} \, [\|q^*\|_{W^{2,1}_{p}(Q_t)} + \|q^*\|_{C([0,t]; \, W^{2-\frac{2}{p}}_p(\Omega))}] \, .
\end{align*}
\end{lemma}
\begin{proof}
For $r = \frac{3p}{(5-p)^+}$ (to be read as $r = \infty$ if $p \geq 5$) and $\theta := \frac{3}{3+p-(5-p)^+}$, the Gagliardo--Nirenberg inequality yields
\begin{align*}
\|u\|_{L^{\infty}(\Omega)} \leq C_1 \, \|\nabla u\|_{L^r(\Omega)}^{\theta} \, \|u\|_{L^p(\Omega)}^{1-\theta} + C_2 \, \|u\|_{L^p(\Omega)} \, .
\end{align*}
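The exponent $\theta$ is dictated by the scaling relation of the Gagliardo--Nirenberg inequality in dimension three; one checks directly that the choices of $r$ and $\theta$ above satisfy
\begin{align*}
\theta \, \Big(1 - \frac{3}{r}\Big) = (1-\theta) \, \frac{3}{p} \, ,
\end{align*}
both for $3 < p < 5$ and for $p \geq 5$ (where $\frac{3}{r} = 0$).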
We apply this inequality to a difference $u = a(t_2) - a(t_1)$ for $ 0 < t_1 \leq t_2 \leq t$. Since $a(t_2) - a(t_1) = \int_{t_1}^{t_2} \partial_t a(s) \, ds$, H\"older's inequality gives $\|a(t_2) - a(t_1)\|_{L^p(\Omega)} \leq (t_2-t_1)^{1-\frac{1}{p}} \, \|\partial_ta\|_{L^{p}(Q_t)}$. This yields
\begin{align*}
\|a(t_2) - a(t_1)\|_{L^{\infty}(\Omega)} \leq & C_1 \, [2\|\nabla a\|_{L^{r,\infty}(Q_t)}]^{\theta} \, \|\partial_t a\|_{L^{p}(Q_t)}^{1-\theta} \, (t_2-t_1)^{(1-\theta) \, (1-\frac{1}{p})} \\
& + C_2 \, (t_2-t_1)^{1-\frac{1}{p}} \, \|\partial_t a\|_{L^{p}(Q_t)} \, .
\end{align*}
We define $\delta := (1-\theta) \, (1-\frac{1}{p})$ and use the continuity of the embedding $W^{2-\frac{2}{p}}_p(\Omega) \subset W^{1,r}(\Omega)$ to see that
\begin{align}\label{differencetime}
\sup_{t_2\neq t_1} \frac{\|a(t_2) - a(t_1)\|_{L^{\infty}(\Omega)}}{|t_2-t_1|^{\delta}} & \leq \|\partial_t a\|_{L^{p}(Q_t)}^{1-\theta} \, (2C_1 \, C \, \|a\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}^{\theta} + C_2 \, t^{\theta \, (1-\frac{1}{p})} \, \|\partial_t a\|_{L^{p}(Q_t)}^{\theta}) \nonumber\\
& \leq C(t) \, (\|a\|_{W^{2,1}_{p}(Q_t)} + \|a\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}) \, .
\end{align}
Now we consider a function $u = u(x, \, s)$ such that $u(x, \, 0) = 0$. Using \eqref{differencetime} and the embedding $W^{2-\frac{2}{p}}_p(\Omega) \subset W^{1,r}(\Omega) \subset C^{\alpha}(\Omega)$, valid for $\alpha := \min\{1, \, 2-\frac{5}{p}\}$, we obtain
\begin{align*}
\|u(s)\|_{C^0(\Omega)} = \|u(s)\|_{L^{\infty}(\Omega)} \leq C(t) \, (\|u\|_{W^{2,1}_{p}(Q_t)}+ \|u\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}) \, s^{\delta}\\
\|u(s)\|_{C^{\alpha}(\Omega)} \leq C \, \|u(s)\|_{W^{1,r}(\Omega)} \leq C \, \|u\|_{C([0,s]; \, W^{2-\frac{2}{p}}_{p}(\Omega))} \, .
\end{align*}
Making use of interpolation inequalities (\cite{lunardi}, Example 1.25 with Corollary 1.24), we find for all $0 \leq \beta \leq \alpha \leq 1$ and $u \in C^1(\Omega)$
\begin{align}\label{interpo}
\|u\|_{C^{\beta}(\Omega)} \leq c \, \|u\|_{C^{\alpha}(\Omega)}^{\frac{\beta}{\alpha}} \, \|u\|_{C^0(\Omega)}^{1-\frac{\beta}{\alpha}} \, .
\end{align}
Thus, for $b := (1-\frac{\beta}{\alpha}) \, \delta$, it follows from \eqref{differencetime} and \eqref{interpo} that for all $s \leq t$
\begin{align*}
\|u(s)\|_{C^{\beta}(\Omega)} \leq C \, C(t)^{1-\frac{\beta}{\alpha}} \, (\|u\|_{W^{2,1}_{p}(Q_t)}+ \|u\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}) \, s^{b} \, .
\end{align*}
For a function $q^* \in W^{2,1}_{p}(Q_t)$, this yields for all $\beta < \alpha$ the bound
\begin{align*}
\sup_{s\leq t} \|q^*(s)\|_{C^{\beta}(\Omega)} \leq & \|q^*(0)\|_{C^{\beta}(\Omega)} + \sup_{s\leq t} \|q^*(s)-q^*(0)\|_{C^{\beta}(\Omega)}\\
\leq & \|q^*(0)\|_{C^{\beta}(\Omega)} + C(t) \, t^{b} \, (\|q^*\|_{W^{2,1}_{p}(Q_t)} + \|q^*\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}) \, .
\end{align*}
Moreover, we observe that $\beta < \alpha = \min\{1, \, 2-\frac{5}{p}\}$ always implies that $\beta/2 < \delta$. Thus, invoking \eqref{differencetime} again
\begin{align*}
\sup_{x\in \Omega} [q^*(x)]_{C^{\beta/2}([0,t])} = & \sup_{x\in \Omega} \sup_{t_2\neq t_1} \frac{|q^*(x, \, t_2) - q^*(x, \, t_1)|}{|t_2-t_1|^{\beta/2}}\\
\leq & C(t) \, (\|q^*\|_{W^{2,1}_{p}(Q_t)} +\|q^*\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))}) \, t^{\delta-\beta/2} \, .
\end{align*}
Thus for all $\beta < \alpha$ we get
\begin{align*}
\|q^*\|_{C^{\beta, \, \frac{\beta}{2}}(Q_t)} \leq \|q^*(0)\|_{C^{\beta}(\Omega)} + C(t) \, t^{\gamma} \, (\|q^*\|_{W^{2,1}_{p}(Q_t)} +\|q^*\|_{C([0,t]; \, W^{2-\frac{2}{p}}(\Omega))})
\end{align*}
with $\gamma$ being the minimum of $\delta - \beta/2$ and $b = (1-\frac{\beta}{\alpha}) \, \delta$. The computation of the exponent stated in the lemma is then straightforward.
\end{proof}
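The computation of the exponent can also be cross-checked numerically; the following sketch (illustrative only) confirms that the $\gamma$ defined in the lemma equals $\min\{\delta - \beta/2, \, (1-\frac{\beta}{\alpha}) \, \delta\}$ on a few sample values of $p$ and $\beta$:

```python
# Illustrative cross-check (not part of the proof) that the exponent gamma in
# the lemma equals min{delta - beta/2, (1 - beta/alpha) * delta},
# with alpha, theta, delta as in the proof above.
def exponents(p, beta):
    alpha = min(1.0, 2.0 - 5.0 / p)
    theta = 3.0 / (3.0 + p - max(5.0 - p, 0.0))
    delta = (1.0 - theta) * (1.0 - 1.0 / p)
    b = (1.0 - beta / alpha) * delta
    if p < 5:
        gamma = 0.5 * (2.0 - 5.0 / p - beta)
    else:
        gamma = (1.0 - beta) * (p - 1.0) / (3.0 + p)
    return min(delta - beta / 2.0, b), gamma

for p in (3.5, 4.0, 4.9, 5.0, 7.0, 10.0):
    for beta in (0.0, 0.1, 0.3):
        if beta < min(1.0, 2.0 - 5.0 / p):   # admissible range of beta
            minimum, gamma = exponents(p, beta)
            assert abs(minimum - gamma) < 1e-9, (p, beta, minimum, gamma)
print("gamma matches the minimum on all sampled (p, beta)")
```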
We now prove some properties of the lower-order operators defined in \eqref{A1right}, \eqref{A3right}. Consider first $g = g(x, \, t, \, q, \,\varrho, \, v, \, \nabla q, \, \nabla v,\, \nabla \varrho)$ with $g$ defined by \eqref{A1right}. The vector field $g$ is defined on the set $\Gamma := Q \times \mathbb{R}^{N-1} \times \mathbb{R}_+ \times \mathbb{R}^3 \times \mathbb{R}^{(N-1)\times 3} \times \mathbb{R}^{3\times 3} \times \mathbb{R}^{3}$ and assumes values in $\mathbb{R}^{N-1}$. As a vector field, $g$ belongs to $C^1(\Gamma; \, \mathbb{R}^{N-1})$, because the maps $R$ and $\widetilde{M}$ are of class $C^2$ (Lemma \ref{rhonewlemma} and Lemma \ref{Mnewlemma}). The derivatives possess the following expressions
\begin{align*}
g_{\varrho} = & R_{\varrho,\varrho} \, \varrho \, \divv v - R_{\varrho,q} \, v \cdot \nabla q - \widetilde{M}_{\varrho,\varrho} \, \nabla \varrho \, \tilde{b}- \widetilde{M}_{\varrho,q} \, \nabla q \, \tilde{b} - \widetilde{M}_{\varrho} \, \divv \tilde{b} + \tilde{r}_{\varrho} \, , \\
g_q = & (R_{\varrho,q} \, \varrho - R_q) \, \divv v - R_{q,q} \, v \cdot \nabla q - \widetilde{M}_{\varrho,q} \, \nabla \varrho \, \tilde{b}
- \widetilde{M}_{q,q} \, \nabla q \, \tilde{b} - \widetilde{M}_{q} \, \divv \tilde{b} + \tilde{r}_{q}\, , \\
g_v = & - R_q \, \nabla q\, , \quad g_{\varrho_x} = - \widetilde{M}_{\varrho} \, \tilde{b}\, , \quad g_{q_x} = - R_q \, v - \widetilde{M}_{q} \, \tilde{b}\, , \quad g_{v_x} = (R_{\varrho} \, \varrho - R) \, \text{I}_{3\times 3}\, ,
\end{align*}
in which all non-linear functions $R, \, \widetilde{M}$, $\tilde{r}$ and their derivatives are evaluated at $\varrho, \, q$.
We next want to study the Nemytskii operator $(q, \,\varrho, \, v) \mapsto g(x, \, t, \, q, \,\varrho, \, v, \, \nabla q, \, \nabla v, \, \nabla \varrho)$ on $\mathcal{X}_{T,+}$. A boundedness estimate for this operator from $\mathcal{X}_{T,+}$ into $L^p(Q_T; \, \mathbb{R}^{N-1})$ is obtained via Lemma \ref{RightHandControl} (this was applied, for instance, in the proof of Proposition \ref{estimateself}). We can apply the same tool to the derivatives. Choosing $G = g_{\varrho}$ in Lemma \ref{RightHandControl} with $r_1 = 1$, $\bar{G}(x, \,t) = |\tilde{b}_x(x, \, t)|$ and $\bar{H}(x, \,t) := |\tilde{b}(x, \, t)|$, we obtain a boundedness estimate for $g_{\varrho}$ as an operator between $\mathcal{X}_{T,+}$ and $L^p(Q_T; \, \mathbb{R}^{N-1})$. With obvious choices, we treat the derivatives $g_q$ and $g_v$ in the same way. Owing to their simpler expressions, the remaining derivatives obey continuity estimates:
\begin{align*}
\|g_{q_x}\|_{L^{\infty,p}(Q_T)}\leq & c_1((m(T))^{-1}, \, M(T), \, \|q\|_{L^{\infty}(Q_T)}) \, (\|v\|_{L^{\infty,p}(Q_T)} + \|\tilde{b}\|_{L^{\infty,p}(Q_T)}) \, ,\\
\|g_{\varrho_x}\|_{L^{\infty,p}(Q_T)}\leq & c_1((m(T))^{-1}, \, M(T), \, \|q\|_{L^{\infty}(Q_T)}) \, \|\tilde{b}\|_{L^{\infty,p}(Q_T)} \, ,\\
\|g_{v_x}\|_{L^{\infty}(Q_T)} \leq & c_1((m(T))^{-1}, \, M(T), \, \|q\|_{L^{\infty}(Q_T)}) \, .
\end{align*}
We also remark that if $u, \, u^*$ are two points in $\mathcal{X}_{T,+}$ and we expand $g(x, \, t, \, u, \, D_xu) = g(x, \, t, \, u^*, \, D_xu^*) + g^{\prime}(u, \, u^*) \, (u-u^*)$ (cf. \eqref{Arightlinear}), then the operators $g^{\prime}(u, \, u^*) = \int_{0}^1 g^{\prime}(x, \, t, \theta \, u + (1-\theta) \, u^*, \, \theta \, D_xu + (1-\theta) \, D_xu^*) \, d\theta$ satisfy similar estimates.
We consider next $f = f(x, \, t, \, q, \,\varrho, \, v, \, \nabla q, \, \nabla v,\, \nabla \varrho)$ with $f$ defined by \eqref{A3right}; $f$ belongs to $C^1(\Gamma; \, \mathbb{R}^{3})$. The derivatives possess the following expressions
\begin{align*}
f_{\varrho} = & - P_{\varrho,\varrho} \, \nabla \varrho - P_{\varrho,q} \, \nabla q - (v\cdot \nabla) v + R_{\varrho} \, \tilde{b} + \bar{b} \, , \\
f_q = & -P_{\varrho,q} \, \nabla \varrho - P_{q,q} \, \nabla q + R_q \, \tilde{b}\, , \\
f_v = & - \varrho \, \nabla v\, , \quad f_{\varrho_x} = - P_{\varrho}\, , \quad f_{q_x} = - P_q \, , \quad f_{v_x} = - \varrho \, v \, .
\end{align*}
We discuss these derivatives as Nemytskii operators on $\mathcal{X}_{T,+}$ with arguments similar to those used for $g$. We summarize our conclusions in the following lemma.
\begin{lemma}\label{gedifferential}
Adopt the assumptions of Theorem \ref{MAIN}. The maps $g$ and $f$ are defined on $\mathcal{X}_{T,+}$ by the expressions \eqref{A1right} and \eqref{A3right}. Then, $g$ and $f$ are continuously differentiable at every $u^* = (q^*, \, \varrho^*, \, v^*) \in \mathcal{X}_{T,+}$. For each $u =(q, \, \varrho, \, v) \in \mathcal{X}_{T,+}$ the derivatives satisfy
\begin{align*}
& \|g_{q}(u, \, u^*)\|_{L^p(Q_T)} + \|g_{\varrho}(u, \, u^*)\|_{L^p(Q_T)} + \|g_v(u, \, u^*) \|_{L^p(Q_T)}\\
& + \| g_{q_x}(u, \, u^*)\|_{L^{\infty,p}(Q_T)} + \|g_{v_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|g_{\varrho_x}(u, \, u^*)\|_{L^{\infty,p}(Q_T)} \\
& \qquad \leq c_1(\|u\|_{\mathcal{X}_T} + \|u^*\|_{\mathcal{X}_T} + [\inf_{Q_T} \inf\{ \varrho, \, \varrho^*\}]^{-1} + \|\tilde{b}\|_{W^{1,0}_p(Q_T)}) \, ,\\
& \|f_{q}(u, \, u^*)\|_{L^p(Q_T)} + \|f_{\varrho}(u, \, u^*)\|_{L^p(Q_T)} + \|f_v(u, \, u^*) \|_{L^p(Q_T)}\\
& + \| f_{q_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|f_{v_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|f_{\varrho_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} \\
& \qquad \leq c_1(\|u\|_{\mathcal{X}_T} + \|u^*\|_{\mathcal{X}_T} + [\inf_{Q_T} \inf\{ \varrho, \, \varrho^*\}]^{-1} + \|\tilde{b}\|_{L^p(Q_T)} + \|\bar{b}\|_{L^p(Q_T)}) \, ,
\end{align*}
with a continuous function $c_1$ that is increasing in its arguments.
\end{lemma}
We can extend this statement to the maps $g^1$ and $f^1$ introduced in the system \eqref{equationdiff1}, \eqref{equationdiff3}. Recall first that
\begin{align}\label{g1}
g^1 = g - R_q \, \partial_t \hat{q}^0 + \widetilde{M} \, \triangle \hat{q}^0 - \widetilde{M}_{\varrho} \, \nabla \varrho \cdot \nabla \hat{q}^0 \, ,
\end{align}
in which $\hat{q}^0 \in W^{2,1}_p(Q_T; \, \mathbb{R}^{N-1})$ is a given vector field. The boundedness of $g^1$ on $\mathcal{X}_{T,+}$ into $L^p(Q_T)$ is then readily verified (Lemma \ref{RightHandControl}). The derivatives satisfy
\begin{align*}
g^1_{\varrho} = & g_{\varrho} - R_{\varrho,q} \, \partial_t \hat{q}^0+ \widetilde{M}_{\varrho} \, \triangle \hat{q}^0 - \widetilde{M}_{\varrho,\varrho} \nabla \varrho \nabla \hat{q}^0 - \widetilde{M}_{\varrho,q} \, \nabla q \, \nabla \hat{q}^0 \, , \\
g_q^1 = & g_q - R_{q,q} \, \partial_t \hat{q}^0+ \widetilde{M}_{q} \, \triangle \hat{q}^0 - \widetilde{M}_{\varrho,q} \nabla \varrho \nabla \hat{q}^0 - \widetilde{M}_{q,q} \, \nabla q \, \nabla \hat{q}^0 \, , \\
g_{\varrho_x}^1 = & g_{\varrho_x} - \widetilde{M}_{\varrho} \, \nabla \hat{q}^0\, , \quad g_{q_x}^1 = g_{q_x} - \widetilde{M}_{q} \, \nabla \hat{q}^0 \, ,
\end{align*}
together with $g_v^1 = g_v$ and $g^1_{v_x} = g_{v_x}$. These expressions can be estimated as in the case of $g$ (replacing $\tilde{b}$ in those estimates by $\nabla \hat{q}^0$). Since $f^1 = f- \varrho \, \partial_t \hat{v}^0 + \divv \mathbb{S}(\nabla \hat{v}^0)$, only the derivative $f^1_{\varrho} = f_{\varrho} - \partial_t \hat{v}^0$ receives a new contribution, which is easily estimated.
\begin{lemma}\label{gedifferentialun}
Adopt the assumptions of Theorem \ref{MAIN}. The maps $g$ and $f$ are defined on $\mathcal{X}_{T,+}$ by the expressions \eqref{A1right} and \eqref{A3right} and $g^1$ is defined via \eqref{g1} and $f^1 = f - \varrho \partial_t \hat{v}^0 + \divv \mathbb{S}(\nabla \hat{v}^0)$, in which $(\hat{q}^0, \, \hat{v}^0)$ is a given pair in $\mathcal{Y}_T$. Then, $g^1$ and $f^1$ are continuously differentiable at every $u^* = (q^*, \, \varrho^*, \, v^*) \in \mathcal{X}_{T,+}$. For each $u =(q, \, \varrho, \, v) \in \mathcal{X}_{T,+}$ the derivatives satisfy
\begin{align*}
& \|g^1_{q}(u, \, u^*)\|_{L^p(Q_T)} + \|g^1_{\varrho}(u, \, u^*)\|_{L^p(Q_T)} + \|g^1_v(u, \, u^*) \|_{L^p(Q_T)}\\
& + \| g^1_{q_x}(u, \, u^*)\|_{L^{\infty,p}(Q_T)} + \|g^1_{v_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|g^1_{\varrho_x}(u, \, u^*)\|_{L^{\infty,p}(Q_T)} \\
& \qquad \leq c_1(\|u\|_{\mathcal{X}_T} + \|u^*\|_{\mathcal{X}_T} + [\inf_{Q_T} \inf\{ \varrho, \, \varrho^*\}]^{-1} + \|\tilde{b}\|_{W^{1,0}_p(Q_T)}+\|\hat{q}^0\|_{W^{2,1}_p(Q_T)}) \, ,\\
& \|f_{q}^1(u, \, u^*)\|_{L^p(Q_T)} + \|f_{\varrho}^1(u, \, u^*)\|_{L^p(Q_T)} + \|f^1_v(u, \, u^*) \|_{L^p(Q_T)}\\
& + \| f^1_{q_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|f^1_{v_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} + \|f^1_{\varrho_x}(u, \, u^*)\|_{L^{\infty}(Q_T)} \\
& \qquad \leq c_1(\|u\|_{\mathcal{X}_T} + \|u^*\|_{\mathcal{X}_T} + [\inf_{Q_T} \inf\{ \varrho, \, \varrho^*\}]^{-1} + \|\tilde{b}\|_{L^p(Q_T)} + \|\bar{b}\|_{L^p(Q_T)} + \|\hat{v}^0\|_{W^{2,1}_p(Q_T)}) \, ,
\end{align*}
with a continuous function $c_1$ that is increasing in its arguments.
\end{lemma} | {"config": "arxiv", "file": "2001.08970.tex"} |